AI’s Greatest Risk: Corporate Control, Not Consciousness, Says Researcher Meredith Whittaker
May 13, 2023
AI Pioneer Geoffrey Hinton, widely known as “the Godfather of AI,” recently made headlines by resigning from Google and issuing a warning that AI could soon surpass human intelligence and potentially pose a threat to humanity.
However, according to Meredith Whittaker, a prominent AI researcher who faced backlash from Google in 2019 for organizing employee opposition to the company’s military drone technology project, Hinton’s concerns are misguided.
As the President of the Signal Foundation, Whittaker explains to Fast Company why Hinton’s alarmist views divert attention from more immediate dangers and emphasizes how workers can combat the negative impacts of technology from within.
Syrus Today drew on Meredith Whittaker’s interview with Fast Company as its source; the interview has been reworked into article form for ease of reading.
Prominent AI researcher emphasizes the need to address immediate threats and corporate control rather than hypothetical concerns.
In a recent interview with Fast Company, prominent AI researcher Meredith Whittaker responded to the media tour of AI pioneer Geoffrey Hinton, who recently resigned from Google while warning about the potential dangers of AI surpassing human intelligence.
Whittaker, known for her previous activism against Google’s involvement in military drone projects, highlights the importance of focusing on more pressing threats and challenges the effectiveness of Hinton’s alarmist views.
Whittaker expresses disappointment over Hinton’s late-stage concerns, noting that he failed to support those who took real risks earlier in their careers to address the dangerous implications of AI controlled by corporations. While Hinton’s warnings may attract attention, Whittaker argues that they divert focus from the immediate harms occurring today.
Whittaker’s Experience: Lack of Support and Disparity in AI Activism
The interview sheds light on Whittaker’s own experience at Google, where she organized opposition to Project Maven, a military drone technology project. Reflecting on her efforts, Whittaker says she anticipated being pushed out because of the financial stakes of challenging such projects.
Whittaker also discusses the lack of support from Hinton during her organizing efforts. She emphasizes that his absence from rallies and actions undermined the effectiveness of raising concerns, and points out that the ability to raise concerns safely is essential. She also argues that silence from influential figures like Hinton endorses an environment that suppresses dissenting voices.
Furthermore, Whittaker highlights a pattern in which women, especially women of color, have faced repercussions for speaking out against AI-related issues. This pattern extends beyond Google and demonstrates the need for greater recognition of marginalized voices in discussions surrounding artificial intelligence.
Regarding Hinton’s dismissal of the concerns raised by former Google researcher Timnit Gebru, who was fired after refusing to withdraw a paper on AI’s harms to marginalized communities, Whittaker finds his statement stunning. She argues that the existing harms caused by AI disproportionately affect historically marginalized groups and are indeed existential for them.
Whittaker suggests that Hinton’s dismissal of these concerns reflects a self-interest rooted in maintaining power and business interests rather than addressing real-world consequences.
Whittaker Debunks AI Consciousness Threat and Urges Action Against Corporate Control
When asked about the possibility of artificial intelligence gaining consciousness and posing a threat to humanity, Whittaker dispels such notions. She stresses the absence of evidence to support the idea that current models of artificial intelligence possess consciousness.
She emphasizes that AI systems remain under the control of a few corporations and can be rendered ineffective through various means, such as power outages, environmental factors, or resistance from workers.
Whittaker warns against falling into a trance-like state in which humans engage with AI systems as if they were human interlocutors, which diverts attention from real issues such as climate change and the damage inflicted on marginalized communities.
The interview concludes with Whittaker stressing the importance of recognizing corporations’ concentrated control of AI technologies. By understanding these corporations’ interests, she believes, it is possible to resist and prevent the actual harms occurring today.
As the tech industry grapples with the future implications of AI, Whittaker’s perspective offers a critical examination of Hinton’s warnings, directing attention towards immediate challenges and the need to address corporate control in shaping the future of AI.