
The SEC Newgate AI Weekly

By Abbey Crawford
01 February 2024

Welcome to this week's AI briefing, bringing you the most interesting developments in AI over the past seven days and what they could mean for you and your business.

2024 Showdown: States rapidly respond with AI laws ahead of election 

As the 2024 election cycle gains momentum, lawmakers in at least 14 states have swiftly introduced legislation to address the challenges posed by artificial intelligence and deepfakes in political campaigns. In the first three weeks of the year, lawmakers from both major parties have proposed bills falling into two main categories: disclosure requirements and outright bans. The urgency comes after a fake robocall impersonating President Biden in New Hampshire highlighted the potential chaos political deepfakes can cause. The new bills aim to ensure transparency around AI-generated content that could influence elections, with some states considering bans on such content in the period immediately before an election. However, their enactment remains uncertain, as evidenced by the limited progress of similar bills last year.

From Taylor Swift to teachers: AI's impact unleashed 

The recent spread of manipulated explicit images of Taylor Swift has highlighted the growing threat of AI being used to create convincing yet fake and damaging content. While the misuse of AI to manipulate images is not new, the increased accessibility of AI tools has exacerbated the problem. The impact extends beyond celebrities: everyday individuals such as students, nurses, and teachers have also become targets. The incident involving Taylor Swift has drawn attention to the issue, with her fan base actively combating the spread of the fake images. However, experts warn that the lack of effective guardrails and content moderation on social media platforms allows such content to persist. The rise of AI-generated imagery poses a significant challenge, prompting calls for legislative change and greater protection against non-consensual deepfake content.

Tackling AI challenges

In a recent interview with NBC's Lester Holt, Microsoft CEO Satya Nadella discussed the potential impacts of AI. Expressing concern over the rise of nonconsensual deepfakes and calling Taylor Swift's case "alarming and terrible", Nadella urged industry-wide collaboration to set boundaries for AI and ensure a secure online environment. He also addressed concerns about AI-generated disinformation in elections, advocating for consensus and cooperation among political entities. Nadella sees generative AI as a tool to enhance human workflows but recognizes the need to establish protections and fair use in a swiftly evolving technological landscape.

Navigating AI in academia 

More than half (53%) of UK undergraduates are using artificial intelligence programs such as Google Bard and ChatGPT for essay assistance, with one in four relying on these applications for topic suggestions and one in eight for content creation, according to a Higher Education Policy Institute survey. While only 5% admit to copying unedited AI-generated text directly into assessments, concerns persist about students' awareness of potential inaccuracies in AI-generated content. In parallel, teachers are exploring AI to streamline their workloads, as demonstrated by the Education Endowment Foundation's research project, which aims to generate lesson plans, teaching materials, exams, and model answers, with the potential to cut teachers' workload and enhance teaching quality. As with so many AI conversations at the moment, however, there are concerns about the impact these tools could have on students and educators.