EE Seminar: Securing Modern Systems is More Challenging Than Ever (and Requires New Dedicated Guardrails)

December 30, 2024, 12:00
Hall 011, Electrical Engineering Kitot (Classrooms) Building
(The talk will be given in English)

 

Speaker: Dr. Ben Nassi
Research fellow in the Faculty of Electrical and Computer Engineering (ECE) at the Technion and a Board Member at Black Hat

Hall 011, Electrical Engineering Kitot Building

Monday, December 30th, 2024

12:00 - 13:00

 

Securing Modern Systems is More Challenging Than Ever (and Requires New Dedicated Guardrails)

 

Abstract

Over the past decade, an increasing number of systems and devices have gained Internet connectivity and been enhanced with sensing capabilities and AI. While these advancements have created a world of smarter, more automated, and highly connected devices, they have also introduced significant security and privacy challenges that cannot be effectively addressed with traditional countermeasures.

In the first part of this talk, we will explore the security and privacy concerns of cyber-physical systems. Specifically, we will examine new threats that have emerged with the deployment of technologies like drones and Teslas in real-world environments. Our discussion will highlight methods for detecting intrusive drone filming and securing Teslas against time-domain adversarial attacks.

The second part of the talk focuses on the challenges posed by the coexistence of functional devices with limited computational power (that do not adhere to Moore’s law) alongside sensors with ever-increasing sampling rates. We will explore how threats such as cryptanalysis and speech eavesdropping—previously accessible only to well-resourced adversaries—can now be executed by ordinary attackers using readily available hardware like photodiodes and video cameras. These attacks leverage optical traces or video footage from a device’s power LED to extract sensitive information.

Finally, in the last part of the talk, we will address the emerging need to secure GenAI-powered applications against a new category of threats we call Promptware. This threat highlights the evolving landscape of vulnerabilities introduced by generative AI systems.

Short Bio

Dr. Ben Nassi is a research fellow in the Faculty of Electrical and Computer Engineering (ECE) at the Technion and a Board Member at Black Hat.

Ben investigates the security and privacy of systems and devices. He has introduced innovative side-channel attacks to recover speech from light emitted by light bulbs and to extract cryptographic keys from a device’s power LED using video footage. In the realm of cyber-physical systems, he developed techniques to secure Tesla vehicles against time-domain adversarial attacks and to detect intrusive video filming conducted by drones. Recently, his research has expanded to AI security, where he proposed methods to protect GenAI-powered applications from AI worms and to safeguard autonomous vehicle perception against emergency vehicle lighting attacks.

His work has been published in leading academic venues such as USENIX Security, IEEE S&P, and CCS, as well as prestigious industrial conferences, including Black Hat, DEF CON, and the RSA Conference. His research has garnered significant media attention, with features in Forbes, Fox News, Wired, Ars Technica, and other major outlets.

Ben earned his PhD from Ben-Gurion University, focusing on “Security and Privacy in the IoT Era,” and completed his postdoctoral fellowship at Cornell Tech. His accomplishments include winning the 2023 Pwnie Award for Best Cryptographic Attack and the Dean’s Award for Excellence in PhD Studies.

 

Seminar attendance will grant listening credit to MSc and PhD students, based on registration of full name and ID number on the attendance form that will be circulated in the hall during the seminar.