What Happens When AI Is Asked to Create a Bomb? Study Reveals LLMs' Susceptibility to 'Jailbreaks'

A recent study sheds light on a concerning issue: Large Language Models' (LLMs) susceptibility to "jailbreaks."

by Jace Dela Cruz