Meta announced on Thursday that it was developing new tools to protect teenage users of its Instagram platform from “sextortion” schemes, as US legislators accuse the platform of harming young people’s mental health.
Gangs carry out sextortion scams by tricking victims into sending explicit photos of themselves and then threatening to make the images public unless the victims pay.
Meta said it was exploring an AI-driven “nudity protection” feature that would detect and blur images containing nudity sent to minors through the app’s messaging system.
The US company said it would also offer advice and safety tips to anyone sending or receiving such messages.
According to US authorities, more than 3,000 young people fell victim to such exploitation schemes in 2022.
Separately, more than 40 US states sued Meta in October, accusing the firm of profiting from children’s pain.
According to the complaint, Meta exploited teenage users by developing a business model that maximized the time they spent on the platform, despite the risks to their health.
Meta announced in January that it would roll out measures to protect under-18s, including tighter content restrictions and expanded parental supervision tools.
The firm said on Thursday that the latest tools were building on “our long-standing work to help protect young people from unwanted or potentially harmful contact”.
It added that the “nudity protection” tool used “on-device machine learning”, a form of artificial intelligence, to analyse images.
The firm, which has also faced repeated accusations of violating its users’ data privacy, stressed that it would not have access to the images unless users reported them.
Meta said it would also use AI tools to identify accounts sending offending material and severely restrict their ability to interact with young users on the platform.