The National Institute of Standards and Technology conducted a groundbreaking study on frontier models just before Donald Trump’s second term as president—and never published the results.

  • Optional@lemmy.world
    link
    fedilink
    English
    arrow-up
    4
    ·
    edit-2
    6 hours ago

    “It became very difficult, even under [president Joe] Biden, to get any papers out,” says a source who was at NIST at the time. “It felt very like climate change research or cigarette research.”

    Before taking office, President Donald Trump signaled that he planned to reverse Biden’s Executive Order on AI. Trump’s administration has since steered experts away from studying issues such as algorithmic bias or fairness in AI systems. The AI Action plan released in July explicitly calls for NIST’s AI Risk Management Framework to be revised “to eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.”

  • sp3ctr4l@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    9
    arrow-down
    1
    ·
    edit-2
    8 hours ago

    Probably all you need to know is that when you see industry conferences about AI and CyberSecurity?

    Yeah, they’re not about how to use AI to improve security with neat, new heuristic detection methods, and automated response scenarios.

    They are about all the extra work you have to do, all the extra things you now need to be aware of and worried about, because AI so routinely introduces so many holes and exploits and flaws … in so many places that you normally wouldn’t think to check, because surely any person or team putting out that terrible of code would have been fired, right?

    Beyond the methods one can use to ‘trick’ AI into doing things it isn’t ‘supposed to do’… mass AI adoption by large swathes of the economy is just literally a national security threat, it fundamentally compromises the security and integrity of tech infrastructure that now undergirds basically everything.