The hottest attraction at this year’s SXSW conference in Austin was undoubtedly Westworld – a saloon town built on the outskirts of the city in homage to the HBO television show. The show, based on Michael Crichton’s 1973 film and set in the near future, takes place in a theme park where guests pay top dollar to fulfil their wildest fantasies with the help of a cast of lifelike robots – androids.
While conference attendees donned cowboy hats and sank Old Fashioneds at a wooden bar, we were propelled into the future through a series of fascinating talks and interviews with some of the biggest and brightest names in science and technology. Here are three over-arching themes that emerged from the conference that could give clues about future trends for engineering.
We’re more connected than ever. Most of us carry small devices that can link us up with the sum of human knowledge in an instant. But there’s still a bottleneck, and it’s one that many at the conference were keen to solve.
In a surprise Q+A, entrepreneur Elon Musk called it the ‘bandwidth’ problem – although technology has massively increased the input capacity of our brains, our ability to send anything back out is largely limited to just two things: our thumbs. It’s as if we’ve installed a broadband connection for downloads, but are still using Morse code to send our messages out.
Start-up Neurable are just one of many companies at SXSW working on a solution. Their CEO, Ramses Alcaide, demonstrated technology that allows you to interact with objects in a VR video game using only your mind. It’s built on a technique well known to psychologists called EEG (electroencephalography), which records brain waves using wet electrodes that sit on the scalp, coated in a conductive gel.
Alcaide said his company are working on shrinking the technology into something much less intrusive that can be worn in or over the ear – when that engineering challenge is solved, it could allow us to control our smartphones with just our minds.
The case of Neurable highlights another interesting development. Although their demo is impressive, the physical technology it uses is decades old (albeit in a much sleeker package). The real advances have been in the software – machine learning algorithms have made the arduous task of identifying the brain waves that correlate with intentions far quicker, and largely automatic.
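To make the idea concrete, here is a minimal sketch of what “software identifying intentions from brain waves” means. Everything here is invented for illustration – the data is synthetic band-power features, and the classifier is a deliberately simple nearest-centroid model, nothing like Neurable’s actual pipeline.

```python
import numpy as np

# Hypothetical sketch: mapping EEG band-power features to a
# "select" vs "rest" intent with a nearest-centroid classifier.
# All data here is synthetic; real EEG pipelines are far richer.

rng = np.random.default_rng(0)

def synthetic_trials(mean, n=100):
    # Each trial: signal power in two bands across four electrodes.
    return rng.normal(loc=mean, scale=0.5, size=(n, 8))

rest = synthetic_trials(mean=1.0)    # baseline brain activity
select = synthetic_trials(mean=2.0)  # activity during an intent

# "Training": store the mean feature vector (centroid) per class.
centroids = {"rest": rest.mean(axis=0), "select": select.mean(axis=0)}

def classify(trial):
    # Assign the label of the nearest centroid (Euclidean distance).
    return min(centroids, key=lambda k: np.linalg.norm(trial - centroids[k]))

print(classify(select[0]))  # very likely "select" on this synthetic data
```

The point of the sketch is the division of labour: the electrodes supply the raw signal (old hardware), while software like `classify` – in practice a trained machine-learning model – does the work of turning it into an intention.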
That was a theme throughout the conference – although some are still focussed on physical engineering, many start-ups are seeking to bring new functionality to existing devices. It could soon get easier – a new start-up has created an AI for building AI.
Normally, when training an AI, data scientists have to tweak parameters and fiddle with settings until they get something that works for them. The purpose of Ople.ai, says founder Pedro Alves, is to automate some of that so that they can focus on coming up with new ways to use data in their business. This could be particularly relevant for mid-sized engineering firms that are collecting a lot of data, but don’t have the resources to hire a huge data science team.
As we’re propelled into the future by fascinating science and engineering, the public can sometimes be left behind – thrust into a world they don’t recognise. That’s particularly true in the automotive space, where self-driving cars are fast becoming a reality in some American cities.
If you call an Uber in Pittsburgh, for example (something which in itself would have been considered science fiction not long ago), you are sometimes offered a ride in a self-driving car, one of a fleet of Volvo SUVs that the company is trialling.
Although there is still a human operator behind the wheel at the moment for safety and regulatory reasons, the car does all the work. In a fascinating presentation, Uber’s senior product designer Molly Nix talked about how the company is building up trust between passengers and self-driving vehicles.
Each driverless Uber has a large touchscreen in the back of the vehicle. Passengers are in control of when their journey starts – they initiate things by tapping ‘Let’s begin’ on the touchscreen, which then switches to a visualisation of what the car sees. This graphic is different from the views Uber’s software engineers work with and has been designed especially for passengers – it highlights the objects the car’s AI has detected, giving the rider confidence.
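The principle Nix described – one set of detections feeding two very different displays – can be sketched as follows. The object names, confidence values and threshold here are all invented for illustration and are not Uber’s design.

```python
# Hypothetical sketch: the same raw detections feed an engineering
# view (everything, noise included) and a simplified rider-facing
# view. Labels, scores and the threshold are invented examples.

detections = [
    {"label": "pedestrian", "confidence": 0.97},
    {"label": "cyclist", "confidence": 0.91},
    {"label": "unknown_blob", "confidence": 0.35},
]

def engineering_view(dets):
    # Engineers see every candidate, however uncertain.
    return dets

def rider_view(dets, threshold=0.8):
    # Riders see only confident, recognisable objects.
    return [d["label"] for d in dets if d["confidence"] >= threshold]

print(rider_view(detections))  # ['pedestrian', 'cyclist']
```

Filtering out the uncertain clutter is part of what makes the passenger display reassuring rather than alarming.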
There’s still work to do, though. Nix pointed out that the hardest part of the journey for driverless taxis is in the ‘seams’ – getting in and out of the car, or adjusting to unexpected problems like the rider not being in the right place. She breaks the problem of building trust down into three areas: transparency, control and comfort.
The last of these explains why our driverless cars won’t be talking to us for a while. Nix explained that although a voice assistant such as Siri or Alexa built into the car would be a totally separate AI from the one driving it, any mistakes or misunderstandings by the voice AI would erode trust in the self-driving AI, and passengers would feel less safe.