OpenAI Is Hiring Someone to Investigate Its Own Employees

At OpenAI, the security threat may be coming from inside the house. The company recently posted a job listing for a Technical Insider Risk Investigator to “fortify our organization against internal security threats.”

According to the posting, job duties include analyzing anomalous activities, detecting and mitigating insider threats, and working with the HR and legal departments to “conduct investigations into suspicious activities.”

A spokesperson for OpenAI said the company doesn’t comment on job listings.

OpenAI is already at the center of a heated debate about AI and security. Employees of the company as well as US lawmakers have publicly raised concerns about whether OpenAI is doing enough to ensure its powerful technologies aren’t used to cause harm.

At the same time, OpenAI has seen state-affiliated actors from China, Russia, and Iran attempt to use its AI models for what it calls malicious acts. The company says it disrupted these actions and terminated the accounts associated with the parties involved.

OpenAI itself became the target of malicious actors in 2023 when its internal messaging system was breached by hackers, an incident that only came to light after two people leaked the information to the New York Times.

In addition to hacker groups and authoritarian governments, the job posting suggests OpenAI is also concerned about threats originating with its own employees, though it's unclear exactly what manner of threat the company is on the lookout for.

One possibility is that the company is seeking to protect the trade secrets that underpin its technology. According to the job posting, OpenAI’s hiring of an internal risk investigator is part of the voluntary commitments on AI safety it made to the White House, one of which is to invest in “insider threat safeguards to protect proprietary and unreleased model weights.”

In an open letter last June, current and former employees of OpenAI wrote that they felt blocked from voicing their concerns about AI safety. The letter called on OpenAI to guarantee a “right to warn” the public about the dangers of OpenAI’s products. It’s unclear if this type of whistleblowing will be covered by the “data loss prevention controls” that the risk investigator will be responsible for implementing.
