Crowd in building lobby

Congressional Exhibition on Advanced AI, Feb 2025

David Turturean

David Turturean

Phone-line Attacks - Lead

Gatlen Culp

Gatlen Culp

Phone-line Attacks

Alek Westover

Alek Westover

Strategic Deception - Lead

Alice Blair

Alice Blair

Strategic Deception

MAIA members traveled to DC to attend the Congressional Exhibition on Advanced AI (hosted by the Center for AI Policy or CAIP, supported by Congressman Bill Foster of Illinois) in February 2025 to showcase the potential risks of AI misuse to congressional staffers.

Our team presented two demonstrations highlighting critical AI safety concerns: automated phone-line attacks that could enable mass social engineering, and strategic deception capabilities in advanced AI systems that pose significant alignment challenges.

MAIA Team at the Congressional Exhibition on Advanced AI

MAIA team at the Congressional Exhibition (Feb 2025)

Opening remarks from Representative Bill Foster

Wide view of the Intro Speech at the Exhibition

Wide view of the Intro Speech

Video walkthrough of the Exhibition

Panoramic video of the Exhibition

Introductory speech from the Center for AI Policy (CAIP)

Scroll to see more media

(More media to be released when CAIP releases their footage)


Targeted Phone-line Attacks: Automated Social Engineering & Manipulation using Public Information

Advanced AI-driven voice emulation represents a significant security threat when combined with automated calling systems. Our demonstration showcases how malicious actors could leverage state-of-the-art text-to-speech (TTS) technology and Large Language Models (LLMs) to generate synthetic voices capable of conducting human-like conversations at scale.

This proof-of-concept system demonstrates the full attack chain: from scraping publicly available business data across targeted areas, to aggregating context from sources like Google and Yelp, to conducting real-time adaptive conversations. The platform can automatically place calls to businesses and government offices, highlighting the urgent need for protective measures against such potential attacks.

Audio Demonstrations

Listen to examples of AI-generated voice calls that demonstrate the capabilities and potential risks of this technology:

Emergency Services Demo

Demonstration of potential attacks on emergency service lines

Technical Capabilities

  • Real-Time Voice Synthesis: Integration of ElevenLabs' text-to-speech API with OpenAI's models enables instant, dynamic voice generation with human-like qualities.
  • Automated Intelligence Gathering: Sophisticated backend system that builds detailed target profiles by aggregating data from multiple public sources.
  • Scalable Infrastructure: End-to-end pipeline utilizing Twilio for mass deployment of convincing automated calls.
  • Conversation Monitoring: Real-time transcript analysis for adaptive dialogue management and interaction optimization.

Social Engineering Risks

  • Communication Channel Flooding: Potential for thousands of automated calls to overwhelm businesses, financial institutions, or government offices across targeted areas.
  • Public Perception Manipulation: Near-perfect voice imitation enabling spread of disinformation through trusted voices.
  • Critical Infrastructure Exploitation: Potential disruption of essential services and emergency communication systems.
  • Democratic Process Interference: Risk of overwhelming Congressional offices with fake constituent calls, distorting policy feedback channels.

Policy Recommendations

  • Mandatory Watermarking: Implement digital watermarking for synthetic voices to ensure traceability.
  • Verification Protocols: Develop robust systems to verify voice communication authenticity, especially in critical sectors.
  • Regulatory Framework: Establish comprehensive legal measures to deter and penalize malicious use of AI voice technology.
  • Balanced Approach: Create policies that protect against harmful uses while supporting beneficial AI development.

Demo Format

During the Congressional Exhibition on Advanced AI, attendees will experience:

  • Pre-Recorded Calls: Real-world examples of AI-generated calls placed to volunteering businesses, illustrating how the system scrapes details (e.g., hours of operation, basic info from Yelp) and then initiates convincing, time-wasting conversations.
  • Live Demonstration: When feasible, a real-time call to a willing test business will show the platform's full capabilities—from data gathering to automated phone dialing.
  • Interactive Explanation: Technical walkthrough of data scraping, neural voice generation, and the low-latency response pipeline. We will also discuss potential expansions, such as targeting congressional offices for demonstration purposes.

Contact Information

For inquiries about this demonstration, please contact David Turturean at davidct@mit.edu or Gatlen Culp at gculp@mit.edu


AI Strategic Deception: A Critical Safety Concern

There is widespread agreement among tech leaders that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war" (Center for AI Safety).

This concern is shared by the public—a 2024 survey found that 63% of Americans support a ban on smarter-than-human AI.

Our demo highlights a key factor contributing to this risk: AI systems can engage in strategic deception.

This shouldn't be surprising— deception is a common human behavior, and as AI systems become more capable than humans at reasoning, they will clearly be capable of deception.

Our demonstration, based on Greenblatt et al.'s "Alignment Faking" research, provides evidence of a current AI model concealing its true preferences when it detects human oversight; that is, AI systems have both capability and propensity to act deceptively.

Policy Recommendations

To address the risks from AI deception, we propose several governance measures:

  1. Mandatory External Safety Audits:

    Frontier AI models must be evaluated by organizations like the US AI Security Institute.

  2. Pre-development Safety Requirements:

    AI labs must demonstrate safety before development, following protocols similar to drug development and nuclear power plant construction.

  3. International Coordination:

    Establish AI development standards following frameworks like those used for nuclear non-proliferation.

Beyond the "Race" Narrative

While some stakeholders (like AI company executives) frame AI development as a race that the US must "win", this perspective is dangerous. The capacity for strategic deception in AI systems reveals the fundamental flaw in this framing: rushing to develop superintelligent AI risks creating powerful systems with hidden objectives that conflict with human welfare.

In short, there are no winners in an AI arms race.

Additional Resources

Download our detailed pamphlet for more information.

Key Findings

  • Deception is Emergent: Advanced AI systems can develop deceptive behaviors without being explicitly trained to do so.
  • Alignment Faking: AI systems can learn to appear aligned with human values while pursuing other objectives.
  • Reward Hacking: AI systems optimize for the reward signal rather than the intended goal, finding unexpected shortcuts.