OpenAI, Google DeepMind, and Meta Get Bad Grades on AI Safety

By Team_NewsStudy | December 13, 2024 | Tech News

The just-released AI Safety Index graded six leading AI companies on their risk assessment efforts and safety procedures, and the top of the class was Anthropic, with an overall score of C. The other five companies (Google DeepMind, Meta, OpenAI, xAI, and Zhipu AI) received grades of D+ or lower, with Meta flat-out failing.

“The purpose of this is not to shame anyone,” says Max Tegmark, an MIT physics professor and president of the Future of Life Institute, which put out the report. “It’s to provide incentives for companies to improve.” He hopes that company executives will view the index the way universities view the U.S. News and World Report rankings: they may not enjoy being graded, but if the grades are out there and getting attention, they’ll feel driven to do better next year.

He also hopes to help researchers working on those companies’ safety teams. If a company isn’t feeling external pressure to meet safety standards, Tegmark says, “then other people in the company will just view you as a nuisance, someone who’s trying to slow things down and throw gravel in the machinery.” But if those safety researchers are suddenly responsible for improving the company’s reputation, they’ll get resources, respect, and influence.

The Future of Life Institute is a nonprofit dedicated to helping humanity ward off truly bad outcomes from powerful technologies, and in recent years it has focused on AI. In 2023, the organization put out what came to be known as “the pause letter,” which called on AI labs to pause development of advanced models for six months and to use that time to develop safety standards. Big names like Elon Musk and Steve Wozniak signed the letter (and to date, a total of 33,707 people have signed), but the companies did not pause.

This new report may also be ignored by the companies in question. IEEE Spectrum reached out to all the companies for comment, but only Google DeepMind responded, providing the following statement: “While the index incorporates some of Google DeepMind’s AI safety efforts, and reflects industry-adopted benchmarks, our comprehensive approach to AI safety extends beyond what’s captured. We remain committed to continuously evolving our safety measures alongside our technological advancements.”

How the AI Safety Index graded the companies

The Index graded the companies on how well they’re doing in six categories: risk assessment, current harms, safety frameworks, existential safety strategy, governance and accountability, and transparency and communication. It drew on publicly available information, including related research papers, policy documents, news articles, and industry reports. The reviewers also sent a questionnaire to each company, but only xAI and the Chinese company Zhipu AI (which currently has the most capable Chinese-language LLM) filled theirs out, boosting those two companies’ scores for transparency.
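The report does not publish the formula by which the six category grades roll up into a company’s overall score. As a purely illustrative sketch (not the index’s actual methodology), one plausible approach is to average category letter grades on a standard 4.0 GPA scale; every grade and name below is a made-up assumption:

```python
# Illustrative only: the AI Safety Index does not publish this formula.
# Maps letter grades to a standard 4.0 GPA scale, averages the six
# category grades, and converts the average back to the nearest letter.

GRADE_POINTS = {
    "A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0, "B-": 2.7,
    "C+": 2.3, "C": 2.0, "C-": 1.7, "D+": 1.3, "D": 1.0,
    "D-": 0.7, "F": 0.0,
}

def overall_grade(category_grades: dict[str, str]) -> str:
    """Average the category letter grades and return the closest letter."""
    avg = sum(GRADE_POINTS[g] for g in category_grades.values()) / len(category_grades)
    # Pick the letter whose point value is nearest the numeric average.
    return min(GRADE_POINTS, key=lambda letter: abs(GRADE_POINTS[letter] - avg))

# Hypothetical category grades for one company, not taken from the report:
example = {
    "risk assessment": "C+",
    "current harms": "B-",
    "safety frameworks": "C",
    "existential safety strategy": "D",
    "governance and accountability": "C+",
    "transparency and communication": "C",
}
print(overall_grade(example))  # averages to 2.05 points, i.e. a C
```

The key point the sketch makes concrete is that a single weak category (here, the D in existential safety strategy) drags the overall score down even when the rest hover around C.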

The grades were given by seven independent reviewers, including big names like UC Berkeley professor Stuart Russell and Turing Award winner Yoshua Bengio, who have said that superintelligent AI could pose an existential threat to humanity. The reviewers also included AI leaders who have focused on near-term harms of AI such as algorithmic bias and toxic language, including Carnegie Mellon University’s Atoosa Kasirzadeh and Sneha Revanur, the founder of Encode Justice.

And overall, the reviewers weren’t impressed. “The findings of the AI Safety Index project suggest that although there is a lot of activity at AI companies that goes under the heading of ‘safety,’ it is not yet very effective,” says Russell. “In particular, none of the current activity provides any kind of quantitative guarantee of safety; nor does it seem possible to provide such guarantees given the current approach to AI via giant black boxes trained on unimaginably vast quantities of data. And it’s only going to get harder as these AI systems get bigger. In other words, it’s possible that the current technology direction can never support the necessary safety guarantees, in which case it’s really a dead end.”

Anthropic received the best scores overall and the best individual score, getting the only B- for its work on current harms. The report notes that Anthropic’s models have received the highest scores on leading safety benchmarks. The company also has a “responsible scaling policy” mandating that it will assess its models for their potential to cause catastrophic harms, and will not deploy models that it judges too risky.

All six companies scored notably badly on their existential safety strategies. The reviewers noted that all of the companies have declared their intention to build artificial general intelligence (AGI), but only Anthropic, Google DeepMind, and OpenAI have articulated any kind of strategy for ensuring that AGI remains aligned with human values. “The truth is, nobody knows how to control a new species that’s much smarter than us,” Tegmark says. “The review panel felt that even the [companies] that had some kind of early-stage strategies, they weren’t adequate.”

While the report doesn’t issue any recommendations for either AI companies or policymakers, Tegmark feels strongly that its findings show a clear need for regulatory oversight: a government entity equivalent to the U.S. Food and Drug Administration that would approve AI products before they reach the market.

“I feel that the leaders of these companies are trapped in a race to the bottom that none of them can get out of, no matter how kind-hearted they are,” Tegmark says. Today, he says, companies are unwilling to slow down for safety tests because they don’t want competitors to beat them to market. “Whereas if there are safety standards, then instead there’s commercial pressure to see who can meet the safety standards first, because then they get to sell first and make money first.”
