
The SEC Newgate AI Weekly

By Matt Redley
08 March 2024

Whilst concerns about AI have often been overlooked in the name of progress and development, it’s clear that AI’s honeymoon is coming to an end. Against a backdrop of significant activity in AI, this past week has seen turbulence as whistleblowers raise major concerns about AI software and tech juggernauts aim to settle scores.

Microsoft employee presses panic button on its own AI tool

In another sign that Generative AI tools are being rolled out without adequate regulatory oversight, a Microsoft employee has sounded the alarm on the company’s own image generation tool. Shane Jones, a principal software engineering lead at Microsoft, sent letters to US regulators and lawmakers warning that one of Microsoft’s AI tools, Copilot Designer, could generate harmful images, including sexualised images of women. Jones claims he repeatedly urged Microsoft to “remove Copilot Designer from public use until better safeguards could be put in place.”

Google has also been in the firing line for its image generation tool, after its AI picture bot was criticised for being ‘woke’, a problem the company has scrambled to address. In the current AI arms race to be first and best, questions are growing louder about why these tools are being rolled out without adequate testing, given the harm they could cause. See here.

Is prompting ChatGPT a science or an art?

Dear reader, since the Generative AI chatbot ChatGPT was released, you may have played around with the tool and entered a variety of prompts that produced a range of answers. Some of ChatGPT’s answers have been downright weird and unpredictable, haven’t they? A particularly interesting blog post published this week by Ethan Mollick puts the process of getting good answers from Generative AI chatbots under the microscope, concluding that the best way to get effective answers from Large Language Models like ChatGPT is to use a ‘Chain of Thought’ approach: give the model step-by-step instructions, e.g. ‘First, outline the results; then produce a draft; then revise the draft; finally, produce a polished output.’ In short, context optimises results, but prompting is most certainly an art rather than a science. See here.
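
To make the idea concrete, here is a minimal sketch of what a ‘Chain of Thought’ style prompt might look like using the OpenAI Python client; the model name and the briefing task are illustrative assumptions, not taken from Mollick’s post.

```python
# A minimal sketch of chain-of-thought style prompting with the OpenAI
# Python client. The model name and the drafting task below are
# illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Instead of a one-line request, spell out the steps you want the
# model to work through, in order.
prompt = (
    "Write a short briefing on this week's AI news. "
    "First, outline the key stories; "
    "then produce a rough draft; "
    "then revise the draft for clarity; "
    "finally, produce a polished version of no more than 200 words."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute whichever you use
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The step-by-step phrasing is the whole trick: the same request asked in one line tends to produce a flatter, less considered answer.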

Musk vs. OpenAI showdown continues

In what will surely be turned into a Hollywood blockbuster, today’s biggest showdown in Silicon Valley continues, after Elon Musk last week announced that he was suing OpenAI, the creator of ChatGPT. Musk, a co-founder and early financier of OpenAI, has alleged a breach of contract, claiming that OpenAI has strayed from its original intent of developing artificial general intelligence (AGI) “for the benefit of humanity” and is instead pursuing a for-profit model. Shooting back at Musk, OpenAI this week released emails suggesting that it was Musk himself who was behind early efforts to seek a for-profit model. Musk quit the company in 2018 after a row over its future, and it’s clear the row will continue. See here.

BBC introduces public tools to verify information

In a world where synthetic images are starting to flood the internet and Generative AI tools are available to all, the need to verify content is more pressing than ever before. This is especially true in a year when two billion people are going to the polls, and given that deepfakes are already meddling in elections. In this context, the BBC this week announced a new feature on BBC News to show how images and videos have been verified as genuine. ‘Content credentials’ will show how BBC journalists have verified content’s authenticity, confirming where a video has come from and how it was checked. This work is already being done by BBC News teams, but it’s the first time it will be explained to audiences. It represents a further step by the BBC in offering audiences verifiable information about the source of what they see, and could offer a blueprint for other publishers to follow suit. See here.

Hunt focuses funding on boosting UK AI capabilities 

Jeremy Hunt’s Budget also saw the Government lean further into AI, with the Chancellor stating that the Government would embrace the technology with even greater enthusiasm across both the private and public sectors. Hunt pledged to more than double the size of the Government’s AI incubator team, ensuring it has in-house expertise from talented technology professionals; committed to using AI to help combat fraud across the public sector; and committed funding to accelerate DWP’s digital transformation, to name a few measures. Since then, the Health Secretary has also said that the Government would explore using AI to generate patient notes to boost productivity in the NHS, as part of a £6bn funding package. Concerns over patient confidentiality will no doubt be key to any roll-out. See here.