US senators drill into FTC’s work to track AI attacks on older citizens


The senators asked the FTC chair four questions about AI scam data collection practices to find out if the commission can identify AI-powered scams and address them accordingly.


Four United States senators have written to Federal Trade Commission (FTC) Chair Lina Khan requesting information on efforts taken by the FTC to track the use of artificial intelligence (AI) in scamming older Americans.

In the letter addressed to Khan, U.S. Senators Robert Casey, Richard Blumenthal, John Fetterman and Kirsten Gillibrand highlighted the need to respond effectively to AI-enabled fraud and deception.

Underlining the importance of understanding the extent of the threat in order to counter it, they stated:

“We ask that FTC share how it is working to gather data on the use of AI in scams and ensure it is accurately reflected in its Consumer Sentinel Network (Sentinel) database.”

Consumer Sentinel is the FTC’s investigative cyber tool used by federal, state or local law enforcement agencies, which includes reports about various scams. The senators asked the FTC chair four questions about AI scam data collection practices.

The senators wanted to know whether the FTC has the capacity to identify AI-powered scams and tag them accordingly in Sentinel. Additionally, the commission was asked whether it could identify generative AI scams that went unnoticed by the victims.

The lawmakers also requested a breakdown of Sentinel’s data to identify the popularity and success rates of each type of scam. The final question asked whether the FTC uses AI to process the data collected by Sentinel.

Casey is also the chairman of the Senate Special Committee on Aging, which studies issues related to older Americans.

Related: Singapore releases National AI Strategy 2.0, plans for 15,000 AI experts

On Nov. 27, the U.S., the United Kingdom, Australia and 15 other countries jointly released global guidelines to help protect AI models from being tampered with, urging companies to make their models “secure by design.”

Exciting news! We joined forces with @NCSC and 21 international partners to develop the “Guidelines for Secure AI System Development”! This is operational collaboration in action for secure AI in the digital age: https://t.co/DimUhZGW4R#AISafety #SecureByDesign pic.twitter.com/e0sv5ACiC3

— Cybersecurity and Infrastructure Security Agency (@CISAgov) November 27, 2023

The guidelines mainly recommended maintaining a tight leash on the AI model’s infrastructure, monitoring for any tampering with models before and after release and training staff on cybersecurity risks.

However, the guidelines did not address possible controls around the use of image-generating models and deepfakes, or around data collection methods and their use in training models.

Magazine: Real AI use cases in crypto: Crypto-based AI markets, and AI financial analysis
