Copenhagen’s Interhuman AI secures €1M to bridge AI’s emotional intelligence gap


Interhuman AI has raised €1 million in pre-seed funding to develop ethical emotion recognition technology. The Copenhagen-based startup targets the non-verbal signals that current AI systems miss, estimated in some studies at up to 93% of human communication, while complying with the EU AI Act’s restrictions on emotion recognition.

While the EU tightens regulations on emotion recognition AI, Copenhagen’s Interhuman AI has secured €1 million in pre-seed funding to develop what it calls ‘ethical affective computing.’ The round, led by Unconventional Ventures, comes just days after the EU AI Act introduced strict limitations on emotion recognition systems in workplaces and educational institutions. Interhuman’s technology aims to decode micro-expressions and vocal nuances while maintaining strict privacy standards, positioning Europe as an alternative to more controversial approaches in China and the US.

Europe’s Ethical Alternative in Affective Computing

Interhuman AI’s €1 million pre-seed funding, announced October 21st through Tech.eu, represents a growing European movement toward what experts call ‘humane AI.’ The Copenhagen-based startup is developing multimodal models that analyze facial micro-expressions and vocal tonality while maintaining strict GDPR compliance and ethical guidelines.

CEO Mikkel Menné stated in the funding announcement: ‘We’re building AI that understands human nuance without compromising dignity. Our approach contrasts sharply with the surveillance models emerging from China or the scale-first approaches from Silicon Valley.’

Addressing the 93% Communication Gap

The timing is significant. Recent MIT research published October 19th revealed that current AI systems misinterpret sarcasm and emotional subtext in 68% of customer service interactions. This aligns with earlier studies showing that up to 93% of human communication is non-verbal—a gap that Interhuman aims to address.

Dr. Elena Rossi, AI ethics researcher at University of Copenhagen, explains: ‘Most AI systems today are tone-deaf to human emotion. They miss the raised eyebrow that indicates skepticism or the slight vocal tremor suggesting anxiety. Interhuman’s work could revolutionize how AI interacts with humans in sensitive contexts like healthcare and education.’

Regulatory Tailwinds and Market Needs

The funding comes as the EU AI Act’s Article 5(1)(f) bans AI systems that infer emotions in workplaces and educational institutions (with narrow medical and safety exceptions), creating demand for ethical alternatives. Simultaneously, WHO’s October 18th guidelines emphasized the need for emotionally intelligent systems in healthcare that preserve patient dignity.

Unconventional Ventures partner Thea Messel noted in their press release: ‘We’re investing in Interhuman because Europe needs its own path in affective computing—one that respects privacy while enhancing human connection.’

Historical Context: Emotion AI’s Controversial Past

The field of emotion recognition AI has faced significant criticism over recent years. In 2019, a comprehensive review published in the Association for Psychological Science’s journal Psychological Science in the Public Interest challenged the very foundation of many emotion recognition systems: the assumption that universal facial expressions reliably correspond to specific emotions. This research undermined many commercial systems that claimed to detect emotions such as anger or happiness based solely on facial muscle movements.

The controversy intensified when major tech companies faced backlash for their emotion recognition projects. In 2020, Microsoft quietly removed its ‘emotional analysis’ features from Azure Face API following ethical concerns. Similarly, Amazon shut down its Rekognition emotion detection capabilities in 2022 after internal protests and external criticism from civil rights groups. These developments created both a market gap and an opportunity for more nuanced, ethically grounded approaches like Interhuman’s multimodal analysis, which combines vocal tonality with contextual understanding.

Europe’s positioning in this space builds upon its historical strength in privacy-focused technology. The General Data Protection Regulation (GDPR), implemented in 2018, initially seemed like a constraint on AI development but has increasingly become a competitive advantage for European startups focusing on trustworthy AI. This regulatory environment contrasts sharply with China’s social-credit applications and America’s commercial surveillance models, creating what legal scholar Anu Bradford has termed ‘the Brussels effect’: European regulations effectively setting global standards for responsible technology development.

