Jump to content

Draft:Robust Intelligence

From Wikipedia, the free encyclopedia
Robust Intelligence
Founded2019
HeadquartersSan Francisco, California, U.S.
Key people
  • Yaron Singer (CEO, Co-founder)
  • Kojin Oshiba (Co-founder)
  • Hyrum Anderson (CTO)
Number of employees
51-100
Websiterobustintelligence.com

Robust Intelligence is an artificial intelligence (AI) security company headquartered in San Francisco, California.[1] The company’s platform protects organizations from the security and safety risks of artificial intelligence models and applications.

Robust Intelligence was founded in 2019 by Dr. Yaron Singer, a Gordon McKay Professor of Computer Science and Applied Mathematics at Harvard University, and Kojin Oshiba, a machine learning researcher and Harvard University alumnus.[2][3] In September 2024, Robust Intelligence was acquired by Cisco.[4]

History

[edit]

Robust Intelligence was co-founded by Yaron Singer, a tenured professor of Computer Science and Applied Mathematics at Harvard, and Kojin Oshiba in 2019 after nearly a decade of combined robust machine learning research at the university and Google Research. Recognizing the state of artificial intelligence adoption and the chronic challenges of AI risk in industry, the pair developed the industry’s first AI firewall.

Prior to founding Robust Intelligence and his ten-year tenure at Harvard, Yaron worked as a Postdoctoral Research Scientist at Google on the Algorithms and Optimization team. This role followed receiving his PhD in computer science from University of California, Berkeley in 2011.[5]

Co-founder Kojin Oshiba graduated with a Bachelor’s Degree in Computer Science and Statistics from Harvard University in 2019. During this time, he spent a year as a machine learning engineer at QuantCo and helped co-found the company’s Japan branch.[6]

Robust Intelligence emerged from stealth mode in 2020 with the announcement of its $14 million fundraising round led by Sequoia Capital.[3] The company raised a $30 million Series B fundraising round in 2021 led by Tiger Global, with participation from Sequoia, Harpoon Venture Capital, Engineering Capital, and In-Q-Tel.[7]

Dr. Hyrum Anderson, Robust Intelligence’s Chief Technology Officer, joined the company in 2022 from Microsoft where he co-organized the AI Red Team and served as the chair of its governing board. An accomplished machine learning and cybersecurity expert, Anderson co-founded the Conference on Applied Machine Learning in Information Security (CAMLIS) and co-authored the book Not With a Bug, But With a Sticker: Attacks on Machine Learning Systems and What To Do About Them.[8]

Several notable figures and technologies in the field of artificial intelligence have emerged from research and development that began at Robust Intelligence. Most prominent are LangChain, an open source framework designed to simplify the creation of applications using LLMs, developed by former Robust Intelligence machine learning engineering leader Harrison Chase; and LlamaIndex, a data framework for connecting custom data sources to LLMs developed by Jerry Liu.[9][10]

In September 2024, Cisco acquired Robust Intelligence.[4]

Research

[edit]

Robust Intelligence researchers have identified and responsibly disclosed several adversarial machine learning techniques and AI security vulnerabilities, both while working at the company and in academia. Recent findings include:

  • Tree of Attacks with Pruning: A new, efficient jailbreak method that works across foundation models that was co-developed with researchers from Yale University.[11]
  • Llama Prompt-Guard-86M Exploit: A simple method for bypassing Meta’s detection model using only spaces between characters.[12]
  • OpenAI Structured Output Exploit: An exploit which bypasses the safety mechanisms, including refusal capabilities, of OpenAI’s Structured Mechanisms functionality.
  • OpenAI Indirect Prompt Injection: A susceptibility in LLMs to malicious instructions concealed within external sources, such as video transcripts or a provided document.[13]
  • Google Gemini Jailbreak: With algorithmic prompt improvement, researchers could bypass safety guardrails in the Google Gemini model.[14]
  • Real Attackers Don’t Compute Gradients: An exploration of the gap between adversarial machine learning research and real-world attackers.[15]
  • NVIDIA NeMo Guardrails Exploits: A jailbreak which overcomes Nvidia’s NeMo guardrails to elicit sensitive data leakage and other harmful outcomes.[16]
  • Poisoning Web-Scale Training Datasets is Practical: Research into data poisoning for models trained on large and indiscriminate datasets from the open internet.[17]

References

[edit]
  1. ^ "Tiger Global leads $30M round for AI reliability startup Robust Intelligence". SiliconANGLE. 2021-12-09. Retrieved 2024-09-30.
  2. ^ "Robust Intelligence to expand Israel activities after raising $30m". Globes. 2021-12-12. Retrieved 2024-09-30.
  3. ^ a b Cai, Kenrick. "This Harvard Professor And His Students Have Raised $14 Million To Make AI Too Smart To Be Fooled By Hackers". Forbes. Retrieved 2024-09-30.
  4. ^ a b Kovacs, Eduard (August 27, 2024). "Cisco to Acquire AI Security Firm Robust Intelligence". SecurityWeek.{{cite news}}: CS1 maint: url-status (link)
  5. ^ Tardif, Antoine (2022-03-09). "Yaron Singer, CEO at Robust Intelligence & Professor of Computer Science at Harvard University – Interview Series". Unite.AI. Retrieved 2024-09-30.
  6. ^ "Kojin Oshiba". Forbes. Retrieved 2024-09-30.
  7. ^ Lardinois, Frederic (2021-12-09). "Robust Intelligence raises $30M Series B to stress test AI models". TechCrunch. Retrieved 2024-09-30.
  8. ^ Kumar, Ram Shankar Siva; Anderson, Hyrum (2023-03-31). Not with a Bug, But with a Sticker: Attacks on Machine Learning Systems and What To Do About Them. John Wiley & Sons. ISBN 978-1-119-88399-9.
  9. ^ Palazzolo, Stephanie. "Exclusive: AI startup LangChain taps Sequoia to lead funding round at a valuation of at least $200 million". Business Insider. Retrieved 2024-09-30.
  10. ^ "LlamaIndex". Forbes. Retrieved 2024-09-30.
  11. ^ "Researchers Use AI to Jailbreak ChatGPT, Other LLMs". www.darkreading.com. Retrieved 2024-09-30.
  12. ^ Claburn, Thomas (July 29, 2024). "Meta's AI safety system defeated by the space bar". The Register.{{cite news}}: CS1 maint: url-status (link)
  13. ^ Burgess, Matt. "The Security Hole at the Heart of ChatGPT and Bing". Wired. ISSN 1059-1028. Retrieved 2024-09-30.
  14. ^ Wiggers, Kyle (2023-12-07). "Early impressions of Google's Gemini aren't great". TechCrunch. Retrieved 2024-09-30.
  15. ^ ""Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice". Informatics@KCL. 2023-01-18. Retrieved 2024-09-30.
  16. ^ "Subscribe to read". www.ft.com. Retrieved 2024-09-30. {{cite web}}: Cite uses generic title (help)
  17. ^ "It doesn't take much to make machine-learning algorithms go awry". The Economist. ISSN 0013-0613. Retrieved 2024-09-30.