Your Feelings, Their Profit: How AI Misreads Your Emotions and Sells Them to the Highest Bidder
As humans, we tend to navigate the world through emotion: quietly, instinctively, and sometimes unconsciously. What are emotions, if not the very fabric of how we live in the world? They’re how we feel, of course, but they are also how we communicate, often without even realizing it. They drive our decisions: in relationships, in politics, and in marketplaces. They connect us to each other and shape how we understand ourselves. But emotions are also deeply personal. While our faces might betray a flicker of joy or sadness, only we know the full story: the nuanced reasons why we feel what we feel and the quiet calculations behind every reaction.
But what if someone told you that wasn’t true? That your feelings aren’t yours alone? That in a world still grappling with what it means to be a person, there’s now an AI technology that claims to know you better than you might even know yourself. Not just for your benefit, but for the benefit of whoever is willing to pay for it—whether that be your boss, your insurer, or your government. This is the promise (and the threat) of emotion recognition technology (ERT).
For AI to claim it can read emotions, though, it needs something solid to go on. It requires a reliable link between what we feel and how we show it. After all, AI can’t literally read our minds (yet). ERT depends on the idea that certain expressions and tones of voice map neatly onto specific feelings: that a smile means joy, or that a raised voice means anger.
The nature of emotional expression
So, is the expression of human emotion universal? Back in 1967, psychologist Paul Ekman sought to answer this question. To do so, Ekman brought a set of flashcards to the isolated peoples of Papua New Guinea to test whether they recognized a display of core expressions, such as anger, sadness, fear, and joy. Despite the language barrier, their responses often matched Ekman’s expectations: a sad face prompted a story about a man who lost his son, while a fearful one produced a tale about a dangerous boar.
Ekman’s studies were seen as ground-breaking at the time. Fifty years later, however, neuroscientist Lisa Feldman Barrett conducted a systematic review of the subject and found no reliable evidence that one could accurately predict someone’s emotional state from their facial expression. Despite this, the multi-billion-dollar ERT industry is growing rapidly and becoming integrated into widely used platforms and services, especially those offered by Big Tech.
The business of recognizing emotions
Using biometric data (unique physical or behavioural characteristics that can identify an individual), ERT assigns emotional states based on facial expressions, body cues, vocal patterns, and eye movement. Advocates of ERT point to its potential positive impact across a wide array of fields: in healthcare, to prioritize care; in business, to refine marketing techniques and monitor employees; and in law enforcement, to detect and prevent crime. Critics argue that the potential harms of this technology are likely to far outweigh the benefits. While the issues raised are varied, most fall into three categories of concern: privacy, accuracy, and control.
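To make the mechanics concrete, the sketch below shows, in deliberately simplified and hypothetical Python, the basic shape of such a system: a handful of biometric measurements go in, and a single discrete emotion label with a confidence score comes out. The feature names, thresholds, and labels are invented for illustration; commercial products use trained deep-learning models rather than hand-written rules, but the input–output structure is the same.

```python
# A deliberately simplified, hypothetical ERT pipeline: a few facial
# measurements in, one hard emotion label (plus a confidence score) out.
# All names and thresholds here are invented for illustration only.
from dataclasses import dataclass


@dataclass
class FaceFeatures:
    mouth_curvature: float  # > 0 roughly "smiling"
    brow_lowering: float    # > 0 roughly "furrowed"
    eye_openness: float     # wide open vs. narrowed


def classify_emotion(f: FaceFeatures) -> tuple[str, float]:
    """Toy rule-based stand-in for a trained emotion classifier."""
    if f.mouth_curvature > 0.5:
        return "joy", 0.9
    if f.brow_lowering > 0.5 and f.eye_openness < 0.3:
        return "anger", 0.8
    if f.eye_openness > 0.8:
        return "fear", 0.7
    return "neutral", 0.5


label, confidence = classify_emotion(FaceFeatures(0.1, 0.7, 0.2))
print(label, confidence)  # "anger", 0.8: a hard label squeezed out of soft, ambiguous cues
```

Whatever the sophistication of the model behind it, the output is the same kind of object: a definitive-looking label attached to an inherently ambiguous signal.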
The issue of privacy
The COVID-19 pandemic accelerated a digital transformation that was already well underway. As workplaces, schools, healthcare, and social lives shifted online, technology became more deeply embedded in our daily routines. In parallel, tech giants consolidated unprecedented levels of power through the accumulation of both vast troves of personal data and market share. Surveillance practices once seen as exceptional have become normalized, often under the guise of public safety or productivity. This new landscape set the stage for more invasive technologies, including ERT, to quietly enter the mainstream.
As early as 2021, the UN High Commissioner for Human Rights warned of ERT’s threat to privacy rights. As with any AI system, ERT encourages the large-scale collection of personal data. However, it is even more contentious because it processes biometric data, which is categorized as “sensitive” under regulations like the EU GDPR, Quebec’s Bill 64, and Canada’s PIPEDA. Unlike standard data collection, ERT draws inferences about our inner states (our thoughts, feelings, and intentions), making it arguably more intrusive. Some privacy activists argue it undermines freedom of thought. In many cases, ERT systems purport to extract sensitive insights such as political beliefs or mental health status, which could then be used to influence access to healthcare, employment, insurance, or financial services.
But these concerns are no longer theoretical. In 2024, Network Rail in the UK secretly tested Amazon’s Rekognition software across several train stations, using it to scan passengers’ faces for emotional responses and demographic traits, all without public consent. The company said the purpose of this data collection was to assess customer satisfaction and to “maximise advertising and retail revenue.” The public backlash quickly prompted the UK Information Commissioner to launch a review into the legality of the pilot.
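Part of what makes this troubling is how low the technical barrier is. The sketch below shows the kind of API call involved, using Amazon Rekognition’s publicly documented face-analysis endpoint via the boto3 Python SDK. It illustrates the general capability only, not Network Rail’s actual pipeline; the image file and AWS region are placeholders.

```python
# Illustrative only: what a single emotion/demographic inference looks like
# against Amazon Rekognition's public DetectFaces API (via boto3).
# The file name and region below are placeholders, not details of any real deployment.
import boto3

rekognition = boto3.client("rekognition", region_name="eu-west-2")

with open("station_camera_frame.jpg", "rb") as f:
    image_bytes = f.read()

response = rekognition.detect_faces(
    Image={"Bytes": image_bytes},
    Attributes=["ALL"],  # "ALL" includes Emotions, AgeRange, Gender, and more
)

for face in response["FaceDetails"]:
    top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
    age = face["AgeRange"]
    print(f"Emotion: {top_emotion['Type']} ({top_emotion['Confidence']:.1f}%), "
          f"estimated age {age['Low']}-{age['High']}")
```

A dozen lines and an off-the-shelf cloud account are enough to attach emotion and age labels to any face caught on camera, which is precisely why where and how such calls are made matters so much.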
Meanwhile, the EU has taken decisive legislative action. In July 2024, the final version of the EU Artificial Intelligence Act was published. The Act explicitly bans the use of emotion recognition systems in workplaces and educational institutions, citing their high risk to privacy, equality, and human dignity. Limited exceptions exist for medical or safety-related uses, but even those require strict oversight.
In the wake of the pandemic, as governments and corporations increasingly turn to AI systems to interpret and predict human behaviour, it’s necessary to assess what is being lost in the process. ERT exemplifies a broader shift: from surveillance of what we do to surveillance of who we are and how we feel. The question now isn’t just whether we’re being watched, but whether our inner lives are being mined, misinterpreted, and sold to the highest bidder.
The inaccuracy of ERT
Due to the subjective nature of emotional expression, ERT is a type of AI that is particularly prone to producing inaccurate results. What registers as anger on one person’s face might be concentration on another’s. This variability undermines the idea that fixed expressions map neatly onto fixed emotions. According to a 2011 study on cultural diversity in expression, East Asians and Western Caucasians differ in which facial features they associate with an angry or happy face. The study also found disparities between the two groups in emotional responses in the presence of authority. The implication is that emotions are complicated and contextual; it would be scientifically unsound to equate a specific facial configuration with a specific emotion across populations.
ERT is also deeply susceptible to bias. As data scientist Cathy O’Neil has emphasized, algorithms do not eliminate bias; they often entrench it. Built on historical data and trained to replicate existing patterns, these systems tend to automate the status quo, including the systemic inequalities embedded within it. This is an instance of the principle known in computer science as “garbage in, garbage out”: flawed or biased input data will inevitably produce flawed or biased outcomes. In the context of ERT, if the training data reflects skewed representations, then the algorithm will learn and reinforce those distortions. AI models rely on statistical patterns in a fixed training set, and they treat these biased inputs as ground truth. The result is that societal prejudices are consolidated into systems that are marketed as neutral or objective. In this way, AI does not transcend human bias; it replicates it at scale, behind a veneer of technological sophistication.
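The feedback loop is easy to demonstrate. The sketch below is a minimal, purely synthetic illustration, assuming nothing about any real product or dataset: if annotators label one group as “angry” more often for the same expressions, a standard classifier trained on those labels reproduces the disparity.

```python
# Synthetic "garbage in, garbage out" sketch: biased training labels in,
# biased predictions out. No real product or dataset is modelled here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5_000

group = rng.integers(0, 2, size=n)       # two demographic groups, 0 and 1
expression = rng.normal(size=(n, 3))     # identical expression distribution for both

# Biased annotation: group 1 is labelled "angry" an extra 30% of the time,
# even though its expressions are statistically the same as group 0's.
genuinely_angry = expression[:, 0] > 0.8
label = (genuinely_angry | ((group == 1) & (rng.random(n) < 0.3))).astype(int)

X = np.column_stack([expression, group])  # the group signal leaks into training
model = LogisticRegression().fit(X, label)

# At test time, feed the model identical expressions and vary only the group flag.
test_expression = rng.normal(size=(2_000, 3))
for g in (0, 1):
    X_test = np.column_stack([test_expression, np.full(2_000, g)])
    score = model.predict_proba(X_test)[:, 1].mean()
    print(f"group {g}: mean predicted 'anger' score = {score:.1%}")
# Expected outcome: a substantially higher average "anger" score for group 1,
# learned purely from the biased labels rather than from anything in the faces.
```

Swap the toy arrays for face embeddings and crowd-sourced emotion labels, and the same dynamic plays out at production scale.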
These issues have already been observed in practice. A 2022 audit of ERT in three commercial services (Amazon Rekognition, Face++, and Microsoft) found stark racial disparities, with each service more likely to assign negative emotions to Black subjects. A 2024 study of leading large multimodal foundation models (LMFMs) such as GPT-4o, CLIP, and Gemini found that anger was misclassified as disgust twice as often for Black women as for white women.
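Audits like these typically boil down to comparing per-group error rates. The sketch below uses entirely invented numbers to show the kind of calculation involved: how often images whose ground-truth label is “anger” get tagged “disgust”, broken down by group. It illustrates the metric only and does not reproduce any study’s data.

```python
# Hypothetical audit calculation with invented numbers: how often is a true
# "anger" expression mislabelled as "disgust", per demographic group?
import numpy as np

rng = np.random.default_rng(0)

def anger_to_disgust_rate(true_labels, predicted_labels):
    true_labels = np.asarray(true_labels)
    predicted_labels = np.asarray(predicted_labels)
    anger_mask = true_labels == "anger"
    return float(np.mean(predicted_labels[anger_mask] == "disgust"))

# 200 images per group, all with ground-truth "anger"; the predictions are
# simulated purely to demonstrate the metric.
preds_group_a = rng.choice(["anger", "disgust"], size=200, p=[0.90, 0.10])
preds_group_b = rng.choice(["anger", "disgust"], size=200, p=[0.80, 0.20])

rate_a = anger_to_disgust_rate(["anger"] * 200, preds_group_a)
rate_b = anger_to_disgust_rate(["anger"] * 200, preds_group_b)
print(f"anger-to-disgust rate: group A {rate_a:.1%}, group B {rate_b:.1%}")
```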
The subjectivity of expression and the bias of algorithms call ERT’s accuracy into question: if the technology cannot promise correct results, is it still valuable?
ERT as a form of public surveillance
For public safety and law enforcement, ERT offers governments (and other entities exercising public control) a tool that promises to aid in threat prevention. In times of war, uncertainty, or political unrest, when the public is more willing to cede freedom in exchange for security, governments could justify using such tools for mass surveillance to recognize and prevent crime. A 2023 study warned that police adoption of emotional AI would enhance proactive surveillance and lead to real-time behavioural profiling with minimal oversight.
While government adoption of ERT or any other facial recognition technology remains controversial in most public spaces, it is increasingly normalized in areas of heightened security, such as borders and airports. iBorderCtrl, for example, is a smart border control project that uses ERT to produce a “risk score” for travellers seeking entry, part of a broader push toward automated border control. As of 2025, twelve EU countries have begun piloting smart systems to flag “security-relevant behaviors” in transit zones and asylum centres, often without consent. Critics warn that this form of ERT may act as an unregulated lie detector, scanning every face in public spaces and attaching subjective “risk” tags to travellers. Such projects often operate with little to no public transparency, disproportionately impacting migrants and other marginalized groups who are subjected to opaque, automated judgment without recourse or oversight.
Governments could also use this technology to deter behaviour that is normally protected in democratic societies, including public protest. In some countries, these worries have already materialized. In 2020, a Dutch company sold ERT to public security bodies in China, where it has reportedly been used to tighten control over the already heavily monitored Uyghur people of Xinjiang. The risk that ERT poses to personal freedoms and democratic values remains a glaring issue, and one that calls for a government response.
AI regulation is a long and winding road
Despite warnings from experts, Canada’s current data and privacy legislation does little to address the risks posed by ERT. From its unsubstantiated scientific premises to its use of sensitive biometric data, ERT is uniquely dangerous. Some critics, including civil society organizations and the European Disability Forum, have called for its complete prohibition. Short of an outright ban, ERT’s outputs should at the very least be treated with a healthy degree of skepticism until there is meaningful scientific consensus that emotional states can be reliably inferred from facial expressions across individuals, cultures, and contexts.
But even if technology could accurately decode human emotions, would that be desirable? Any technology that purports to detect our internal states inevitably threatens to infringe on our privacy, bodily autonomy, and psychological integrity. And, as with most emerging technologies, our grasp of its consequences lags far behind its adoption. That is why it is of paramount importance that experts, lawmakers, and the public engage in open, transparent, and critical dialogue about its future. A clear, proactive, and comprehensive regulatory framework for ERT is necessary to safeguard democratic freedoms. Some advocates have even argued that the development and use of emotion recognition should be subject to the same scientific standards we apply to pharmaceuticals: no deployment without rigorous, peer-reviewed evidence of efficacy and safety. Combining such scientific scrutiny with strict regulation of biometric data could help mitigate the worst harms.
Canada’s current approach to AI regulation is far from ideal. Rather than proactively safeguarding human rights in the face of powerful emerging technologies, the new federal government has embraced a different mindset: prioritizing economic gain and global competitiveness over privacy, accountability, and democratic oversight. After the federal government’s Artificial Intelligence and Data Act quietly died in Parliament earlier this year, AI Minister Evan Solomon confirmed that it would not be reintroduced wholesale. The government’s newly stated approach to AI regulation will prioritize economic competitiveness and intellectual property over consumer safety and privacy rights.
This is disappointing, but it also offers an opportunity. In the absence of outdated or compromised legislation, Canada has a chance to build something better from the ground up: a regulatory framework that does not merely replicate the EU’s defensive posture but actively prevents harmful technologies like ERT from ever being normalized. For that to happen, though, public pressure is essential. If governments are to respond to the real dangers posed by emotion recognition, they must first hear loudly and clearly from the people most at risk.