Tech Xplore on MSN
A new method to steer AI output uncovers vulnerabilities and potential improvements
A team of researchers has found a way to steer the output of large language models by manipulating specific concepts inside ...
Here are three papers describing different side-channel attacks against LLMs. “Remote Timing Attacks on Efficient Language Model Inference“: Abstract: Scaling up language models has significantly ...
It feels like only yesterday that ChatGPT took the world by storm. Its ability to reason and give human-like responses made everyone believe that artificial intelligence is set to revolutionize our ...
The UK AI Security Institute (AISI) has partnered with the commercial security sector on a new open source framework designed to help large language model (LLM) developers improve security posture.
SACRAMENTO — The question for many schools about using large language models (LLMs) has shifted from “if” to “how,” and there are no shortage of technology vendors bidding for their attention. But for ...
Some of the world’s most widely used open-weight generative AI (GenAI) services are profoundly susceptible to so-called “multi-turn” prompt injection or jailbreaking cyber attacks, in which a ...
Security researchers uncovered a range of cyber issues targeting AI systems that users and developers should be aware of — some as demo attacks and others already a threat in the wild. The year of ...
Yi Yang (Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Jinghua Liu (Institute of ...
Researchers have published the recipe for an artificial-intelligence model that reviews the scientific literature better than some major large language models (LLMs) are able to, and gets the ...
Some results have been hidden because they may be inaccessible to you
Show inaccessible results