Can digital assistants powered by artificial intelligence be used as agents of undue influence? Siri, Alexa, Cortana, Google Now, M and Bixby help us to send messages, play music, and set reminders. We can even find ourselves in artificial relationships with our AI companions. We can joke with them as if they were thinking, feeling beings, but they are not.

Digital assistants are programmed to learn and adapt; with every interaction, they gather data – and then they relay that data back to Apple, Amazon, Microsoft, Google, Facebook or Samsung. Could it be that George Orwell’s predictions have come true, but, instead of being watched by Big Brother, we are being overheard by Little Sister?

University of Bath AI researcher Joanna Bryson warns that the feeling that these digital assistants have joined the family may give us a false sense of security, because they are all actually spies, reporting information back to their controllers. Bryson says that she modifies her conversation if she knows that there is a digital assistant in the environment.

Email systems also collect information. For instance, in the UK, online shopping service Ocado uses Google's TensorFlow machine-learning framework to analyze customers' messages. The results of this kind of analysis are immediate, and we see them everywhere – we are painfully aware of the online advertising services that try to sell us variants of items we have recently bought, or even just considered buying.

Google and Facebook collect reams of data, but assure us that it is anonymized. The problem is that we have to trust everyone who has access to that data. Massive failures in security have led to data leaks from Barclays, Verizon, Wells Fargo, HSBC and, most recently, Equifax. If our "secure" banking services can be hacked, then we should all be a little suspicious about how safely our information is handled by the huge corporations that hoover it up – and about the online contracts we sign, usually without bothering to read them.

Should we hand over our privacy to artificial intelligence? Stephen Hawking has said that “the development of full artificial intelligence could spell the end of the human race.” Elon Musk warns that AI poses “vastly more risk than North Korea.” Hopefully, we will establish ethical controls before the Terminators are released among us, but we must not sit by silently while big business scoops up every last detail of our medical and bank records, and our shopping and browsing habits, let alone our most private and personal conversations.

Much of the earliest AI research was developed by the military. Siri, for instance, was intended to assist soldiers before she was built into iPhones and Macs. These systems were initially developed for warfare, and they can still be used as weapons.

AI also has a place in the brave new world of politics. It is very likely that both the recent US presidential campaign and the UK's Brexit referendum were influenced by AI applications that scrutinize social media. Online bots were used in those propaganda wars – and at every step, they were pretending to be human.

Not only do these digital assistants report to their programmers – ostensibly to improve services – but they can also be hacked. A compromised assistant could report your every word to hackers who, in the wrong circumstances, may be able to cause havoc in your home, steal your identity, and empty your bank accounts.

In a recent piece in New Scientist, University of Bristol researcher Nello Cristianini points out: "We have happily accepted incredible intrusions into our privacy for nearly two decades. Now we live in a world where our own personal information is used and traded and mined for value. We should ask questions about where we want to draw the line."

It is time to ask those questions.