Why OpenAI Needs To Be Singled Out in the Troubled Tech Valley 

AI models pretending to be human feed on everything you search, read, or click on the internet. Whether it is your Instagram photos, your conversations with chatbots like Bing, or your emails, it all produces a trove of personal data for big tech. And though all of Silicon Valley has been feeding on the public's data, fingers have been raised in particular at its celebrity, OpenAI, for its closed-door activities.

The Californian AI research lab released a 98-page technical report about GPT-4 earlier this year, which was deemed neither transparent nor "open" in any meaningful way. The company, run by Sam Altman, did not disclose any details about the architecture (including model size), the hardware, the size of its dataset, or the training method, making GPT-4 its most secretive release so far.

Emily M. Bender, a professor of Linguistics at the University of Washington, tweeted that this secrecy did not come as a surprise to her. “They are willfully ignoring the most basic risk mitigation strategies, all while proclaiming themselves to be working towards the benefit of humanity,” she tweeted.

Keeping things so hush-hush has been unfair to people who now find it difficult to know whether their work has been scraped. Moreover, it has become nearly impossible for them to prove intellectual property and copyright claims in court, giving OpenAI a legal yet unfair advantage.

The paper received enormous backlash, with critics claiming the company withheld the details to maintain its dominant position in the market. OpenAI is at fault, but so are the rest of the companies building AI models that mimic the way humans work, play, and create. We have simply been taking these companies' word for it, and as a result, privacy is a mess.

Safe, private, and secure: theoretically

After some initial struggles, OpenAI avoided being shut out of Italy by providing limited privacy controls. Even Google's writing helper Smart Compose, trained on the public's Gmail data, is off by default under European Union law. Tech companies face minor legal struggles now and then but eventually dodge the bullet, whether by paying penalties, finding a legal loophole, or tweaking their guidelines here and there.

In less than a decade, tech companies including Google, Apple, Meta, and Amazon have collectively received penalties of over $30 billion. Fines are not just a 'cost of doing business' for tech giants, Isabelle de Silva, president of the French Competition Authority, declared publicly. "Fines are an element of the identification of what is wrong in the conduct," de Silva said.

A recent exposé by Geoffrey A. Fowler of The Washington Post poses a thought-provoking question: "Which data of ours is and isn't off limits?" The investigative piece takes a deep dive into how Valley companies are using your data and why there is not much you can do about it. He further notes that "much of the answer is wrapped up in lawsuits, investigations and hopefully some new laws. But meanwhile, big tech is making up its own rules."

Drawing a line

The debate around tech companies and their Orwellian mass collection of data has been going on since the beginning of time. Their refusal to budge from these practices has often caused outrage, resulting in authorities (finally) stepping in.

Mozilla has launched a campaign calling on Microsoft to come clean. "If nine experts in privacy can't understand what Microsoft does with your data, what chance does the average person have?" the announcement stated. As part of the campaign, four lawyers, three privacy experts, and two campaigners examined the software giant's updated Service Agreement, which goes into effect on 30 September. Surprisingly, none of the experts could tell whether Microsoft plans to use your personal data.

Exactly a year ago, the Federal Trade Commission (FTC) announced an initiative to draft rules cracking down on what it considers "harmful commercial surveillance," or "the business of collecting, analyzing and profiting from information about people." There has been no update since.

Tech companies walk a thin line between 'making products better' and theft. On a darker note, these AI companies have been in a constant tussle with ethicists, both in-house and independent, around the world.

Even though OpenAI started as a non-profit champion, it has become part of the money-making circus in the Bay Area. The company's darling ChatGPT has given artists and authors plenty of reasons to drag the startup to court, yet its tight-lipped approach has given it leverage over the others. Ironically, when GPT-2 was released in 2019, Jack Clark, OpenAI's former policy director, said that rather than act like the danger isn't there, "it's better to talk about AI's dangers before they arrive."

The post Why OpenAI Needs To Be Singled Out in the Troubled Tech Valley appeared first on Analytics India Magazine.

