The dystopian era is just getting started. By the end of 2017 we’ll be feeding on content synthesized to mimic real people, leaving us in a sea of disinformation powered by AI and machine learning. The media, giant tech corporations and citizens already struggle to discern fact from fiction, and as this technology is democratized the problem will only become more prevalent.
While most hearables focus on biometric smarts or language translation, a new earpiece aims to notify users which voices are human and which aren’t.
DT R&D prototyped Anti AI AI, a device worn on the ear and connected to a neural net trained on real and synthetic voices. When a synthetic voice is detected, the device cools the skin using a thermoelectric plate, alerting the wearer that the voice they are hearing was synthesised: by a cold, lifeless machine.
How does it work?
Thermal Feedback: When a synthetic voice is detected, the device gives the wearer a unique sensation that matches what they are experiencing. Common feedback mechanisms use light, sound or vibration to alert users; by using a 4x4mm thermoelectric Peltier plate, the device can create a noticeable chill on the skin near the back of the neck without drawing too much current.
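The current constraint suggests the plate can’t be driven continuously. A minimal sketch of how that trigger logic might look (this is an assumption about the firmware, not DT’s actual code; the class name and timing values are hypothetical):

```python
class PeltierDriver:
    """Debounces detections so the Peltier plate fires in short pulses,
    keeping average current draw low (timing values are illustrative)."""

    def __init__(self, pulse_s=2.0, cooldown_s=5.0):
        self.pulse_s = pulse_s        # how long each cooling pulse lasts
        self.cooldown_s = cooldown_s  # minimum gap between pulses
        self._last_pulse_end = float("-inf")

    def on_detection(self, now_s):
        """Called when the classifier flags a synthetic voice.
        Returns True if a new cooling pulse should start now."""
        if now_s - self._last_pulse_end >= self.cooldown_s:
            self._last_pulse_end = now_s + self.pulse_s
            return True  # here real firmware would switch the plate on
        return False
```

A rapid burst of detections would then produce one chill rather than a sustained (and power-hungry) cold spot.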
To tie the hardware together, DT developed several independent components. The neural network formed the backbone of the system, remotely classifying audio data streamed in via a phone (more about the network below). All training was done in an offline step before running the classifier; one improvement would be to continually refine the network on new input it receives ‘in the wild’, but that was beyond the five-day time frame in which the hearable was developed. The iPhone app provided the glue between the IoT wearable and the neural network, and ended up being a rather simple app, doing as little processing as possible. The architecture of the whole system is shown below.
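The shape of that pipeline — phone streams short audio windows to a remotely hosted classifier trained offline, and only a binary flag comes back to the wearable — can be sketched as follows. This is our guess at the structure, not DT’s code; the real network almost certainly used richer features than the toy spectral ones here:

```python
import numpy as np

def features(window):
    """Toy spectral features for a 1-D audio window: spectral centroid
    and spectral flatness (an assumption standing in for whatever
    representation the real network consumed)."""
    spectrum = np.abs(np.fft.rfft(window))
    spectrum = spectrum / (spectrum.sum() + 1e-9)
    freqs = np.arange(len(spectrum))
    centroid = (freqs * spectrum).sum()
    flatness = np.exp(np.log(spectrum + 1e-9).mean()) / (spectrum.mean() + 1e-9)
    return np.array([centroid, flatness])

def classify(window, weights, bias):
    """Logistic classifier standing in for the offline-trained neural
    net. Returns P(synthetic) in [0, 1]; the phone would compare this
    against a threshold and message the wearable on a positive."""
    z = features(window) @ weights + bias
    return 1.0 / (1.0 + np.exp(-z))
```

In this arrangement the phone does as little work as possible, matching the article’s description: it forwards windows, applies the threshold, and relays the result to the earpiece.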