Disclosure: The views and opinions expressed here are solely those of the author and do not necessarily represent the views and opinions of crypto.news editorial.
The concentration of artificial intelligence (AI) development in the hands of a few powerful companies raises significant concerns about individual and social privacy.
With the ability to capture screenshots, record keystrokes, and monitor users at all times through computer vision, these companies have unprecedented access to our personal lives and sensitive information.
Like it or not, your private data is in the hands of hundreds, if not thousands, of businesses. There are tools on the market that let anyone check how many companies hold their personal data; for most people, the count runs into the hundreds. The situation is getting worse with the rise of artificial intelligence.
Companies around the world are embedding OpenAI’s technology into their software, and everything you enter is processed on OpenAI’s central servers. Moreover, members of OpenAI’s safety team have reportedly been leaving the company.
When you download an app like Facebook, it can collect almost 80% of your data, including your habits and hobbies, behavior, sexual orientation, biometric data, and much more.
Why are companies collecting all this information?
Simply put, it can be quite lucrative. Consider, for example, an e-commerce company that wants more sales. Without detailed data about its customers, it has to rely on broad, untargeted marketing campaigns.
But let’s say they have rich data profiles about customers’ demographics, interests, past purchases, and online behavior. In this case, they can use AI to deliver hyper-targeted ads and product recommendations that significantly increase sales.
As artificial intelligence penetrates every aspect of our lives, from advertising to social media, from banking to healthcare, the risk of sensitive information being disclosed or misused increases. That’s why we need stealth AI.
Data dilemma
Consider the huge amounts of personal data we entrust to tech giants like Google and OpenAI every day. Every search query, every email, every interaction with AI assistants is logged and analyzed. Their business model is simple: your data is fed into advanced algorithms to target ads, recommend content, and keep you engaged with their platforms.
So what happens when you take this to the extreme? Many of us interact so closely with AI that it knows our deepest thoughts, fears, and desires. You’ve given it everything about yourself, and now it can simulate your behavior with uncanny accuracy. Tech giants can use this to manipulate you into buying products, voting a certain way, or even acting against your own interests.
This is the danger of centralized AI. When a handful of companies control the data and algorithms, they have tremendous power over our lives. They can shape our reality without us even realizing it.
A better future for data and AI
The answer to these privacy concerns lies in rethinking the underlying layer of how data is stored and computed. By building systems with security and privacy at the core, we can create a better future for data and AI that respects individual rights and protects sensitive information. One such solution is decentralized, non-logging, private AI powered by confidential virtual machines (VMs). Confidential VMs play an important role in keeping data confidential during AI processing: they process and store sensitive data inside hardware-based trusted execution environments, preventing unauthorized access and data breaches.
Features such as secure hardware isolation, encryption in transit and at rest, secure boot processes, and trusted execution environments (TEEs) help protect the confidentiality and integrity of data. By leveraging these technologies, businesses can ensure that user data is protected throughout the AI processing pipeline without compromising privacy.
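The data flow this enables can be sketched in a few lines. This is a minimal illustration, not a real confidential-computing implementation: the TEE is simulated by a function boundary, the key is assumed to have been released to the enclave after remote attestation, and the SHA-256-based XOR stream cipher is a placeholder for an authenticated cipher such as AES-GCM running inside hardware like Intel SGX or AMD SEV.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the key (placeholder, not a real cipher)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """Symmetric XOR stream cipher: the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Hypothetical key, provisioned to the enclave only after attestation.
KEY = b"released-to-enclave-after-attestation"

def trusted_enclave(ciphertext: bytes) -> bytes:
    """Inside the TEE: decrypt, run inference, re-encrypt the answer."""
    prompt = xor_crypt(KEY, ciphertext)
    answer = b"analysis of: " + prompt  # stand-in for the AI model
    return xor_crypt(KEY, answer)

def untrusted_host(ciphertext: bytes) -> bytes:
    """The cloud host only relays opaque bytes; plaintext never appears here."""
    return trusted_enclave(ciphertext)

# Client side: encrypt locally, send, decrypt the response locally.
ciphertext = xor_crypt(KEY, b"blood test results ...")
response = xor_crypt(KEY, untrusted_host(ciphertext))
```

The point of the sketch is the trust boundary: the host function never sees plaintext or key material, so a compromised cloud provider learns nothing about the prompt or the response.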
With this approach, you retain full control over your data: you choose what to share and with whom. Achieving truly private and secure AI is a complex challenge that requires innovative solutions. Although decentralized systems are promising, only a handful of projects are actively working on the problem. LibertAI, a project I contribute to, alongside startups like Morpheus, is exploring advanced encryption techniques and decentralized architectures to ensure data remains encrypted and under user control throughout the AI processing pipeline. These efforts are important steps towards realizing the potential of stealth AI.
The potential applications of stealth AI are vast. In healthcare, large-scale studies on sensitive medical data can be enabled without compromising patient privacy. Researchers can extract information from millions of records while ensuring individual data remains secure.
In finance, stealth AI can help detect fraud and money laundering without revealing personal financial information. Banks can share data and collaborate on AI models without fear of leaks or breaches. And this is just the beginning. From personalized education to targeted advertising, stealth AI can unlock a world of possibilities while keeping privacy at the forefront. In the Web3 world, autonomous agents can hold private keys and conduct transactions directly on the blockchain.
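The Web3 agent idea above can be sketched as follows. This is an illustrative, hypothetical design, not an actual wallet API: the signing key is generated inside the enclave and never crosses its boundary, so the host only ever sees transactions going in and signatures coming out. HMAC-SHA256 stands in for a real on-chain signature scheme such as ECDSA over secp256k1.

```python
import hashlib
import hmac
import secrets

class EnclaveAgent:
    """Sketch of an autonomous agent whose private key lives only inside a TEE."""

    def __init__(self) -> None:
        # Key material is created inside the enclave; there is deliberately
        # no method that exports it to the host.
        self._key = secrets.token_bytes(32)

    def sign_transaction(self, tx: bytes) -> bytes:
        """Only the signature leaves the enclave, never the key.
        (HMAC is a placeholder for a real blockchain signature.)"""
        return hmac.new(self._key, tx, hashlib.sha256).digest()

agent = EnclaveAgent()
tx = b'{"to": "0xabc...", "value": "1 ETH"}'
signature = agent.sign_transaction(tx)
```

Because the key never leaves the enclave, even the operator of the machine running the agent cannot steal its funds or forge its transactions.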
Challenges
Of course, realizing the full potential of stealth AI will not be easy. There are technical challenges to overcome, such as ensuring the integrity of encrypted data and preventing leaks during processing.
There are also regulatory hurdles to navigate. Laws around data privacy and AI are still evolving, and companies will need to tread carefully to remain compliant. GDPR in Europe and HIPAA in the US are just two examples of the complex legal landscape.
But perhaps the biggest challenge is trust. For private artificial intelligence to become widespread, people need to believe that their data will be truly safe. This will require not only technological solutions, but also transparency and clear communication from the companies behind them.
The road ahead
Despite the challenges, the future of stealth AI looks bright. As more and more industries realize the importance of data privacy, the demand for secure AI solutions will also increase.
Companies that can deliver on the promise of stealth AI will have a huge competitive advantage. They will be able to tap into vast amounts of data that were previously off-limits due to privacy concerns, and they will be able to do so with the trust and confidence of their users.
But it’s not just about business opportunities. It’s about building an AI ecosystem that puts people first, one that respects privacy as a fundamental right, not an afterthought.
As we hurtle towards a future increasingly driven by AI, confidential AI may be the key to unlocking its full potential while keeping our data safe. This is a win-win situation that we cannot ignore.
Jonathan Schemoul
Jonathan Schemoul is a technology entrepreneur, CEO of Twentysix Cloud and aleph.im, and a founding member of LibertAI. He is a senior blockchain and AI developer specializing in scalable decentralized technologies for decentralized cloud computing, IoT, financial systems, web3, gaming, and AI. Jonathan also advises major French financial institutions and businesses such as Ubisoft, and supports regional innovation initiatives.