Facebook to scan private chats in Messenger for signs of scams and deliver pop-up notifications to users who appear to be falling for them
- Machine learning will detect unusual behaviour and warn people of scams
- In-app notifications will warn people and offer information on how to avoid scams
- Similar feature will be used to help users spot potential imposters
Facebook Messenger is getting an update designed to stop people being duped by scam artists on the app.
Machine learning will detect telltale signs of nefarious activity and trigger in-app notifications which appear at the top of the conversation.
These will prompt the user to consider the trustworthiness of the person approaching them and provide easy-to-access information on how to spot, and avoid, scams.
Automatically generated in-app notifications have been gradually rolled out to Messenger users on Android since March, Facebook says.
The feature is now set to reach iOS users worldwide next week.
The tool is designed to catch sophisticated scam attempts on the app and operates within Messenger’s encryption, allowing it to function without the company being able to see private conversations.
Facebook claims it will work even when Messenger makes the shift to full end-to-end encryption, as is used by sister firm WhatsApp.
The initial notification will ask users if they know the person they are talking to, and is designed to appear when the system detects that someone may be trying to deceive them.
A new artificial intelligence that Facebook is calling the ‘universal product recognition model’ will trawl through users’ photos to identify items so that products can be made ‘shoppable.’
The AI, which is being rolled out in tandem with a new ‘Shops’ feature, will scan images uploaded by users on Facebook and use machine learning to identify specific products – whether clothing, electronics, or even cars.
For now the AI will be focused on Marketplace, where it is being used to scan photos of for-sale products uploaded by Facebook users.
‘We want to make anything and everything on the platform shoppable, whenever the experience feels right,’ Manohar Paluri, head of Applied Computer Vision at Facebook, told The Verge.
Facebook’s new AI won’t just be able to identify general products, it will be able to discern specific attributes of the products it crawls, including what the brand is, the size, or what material it’s made out of.
The tool, called GrokNet, is already in use on the ‘Marketplace’ section, where users are able to list items for sale.
Once an image is uploaded, GrokNet can suggest a title by examining what the product is.
Details of how this works are scarce, in order to prevent criminals from circumventing the feature; Messenger has only revealed that the machine learning will latch on to ‘behavioural signals’.
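Facebook has not published how its model actually works, but the general idea of scoring ‘behavioural signals’ on the user’s own device – so that warnings can fire without message content ever leaving the encrypted chat – can be sketched as follows. Every signal name, weight, and threshold below is a hypothetical illustration, not Facebook’s implementation:

```python
# Hypothetical sketch of on-device behavioural-signal scoring.
# All fields, weights and thresholds are illustrative assumptions;
# the key point is that scoring uses conversation metadata available
# locally, so private message text is never read or sent to a server.

from dataclasses import dataclass


@dataclass
class ConversationSignals:
    """Behavioural signals visible on-device without reading messages."""
    sender_account_age_days: int     # very new accounts are riskier
    is_existing_contact: bool        # unsolicited contact is riskier
    bulk_message_rate: float         # messages/minute to many recipients
    name_matches_known_friend: bool  # possible impersonation attempt


def scam_risk_score(s: ConversationSignals) -> float:
    """Combine signals into a 0..1 risk score (illustrative weights)."""
    score = 0.0
    if s.sender_account_age_days < 30:
        score += 0.3
    if not s.is_existing_contact:
        score += 0.2
    if s.bulk_message_rate > 1.0:
        score += 0.2
    if s.name_matches_known_friend and not s.is_existing_contact:
        # Looks like a friend but is not connected: likely an imposter.
        score += 0.3
    return min(score, 1.0)


def should_warn(s: ConversationSignals, threshold: float = 0.5) -> bool:
    """Trigger the in-app warning banner when risk crosses a threshold."""
    return scam_risk_score(s) >= threshold
```

In this toy version, a brand-new account imitating a friend’s name would cross the warning threshold, while a long-standing contact would not; a real system would presumably learn such weights from data rather than hard-code them.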
As well as preventing scams, the in-app notifications will be used to protect people from imposters who approach them.
For example, a notification will appear if a minor (under 18) is approached by an adult they may not know.
An example provided by Facebook describes what would happen if a child is approached on the app by an account that appears to be imitating one of their actual friends – for instance, one with the same profile picture and name.
Messenger will point this out to the user and allow them to block the account if they wish.
This feature is designed to work alongside stricter child protection policies employed by Messenger, such as those which limit contact from adults they aren’t connected to.
Facebook Messenger already has a procedure in place which uses machine learning to detect and disable the accounts of adults who are engaging in inappropriate interactions with children.
‘These features show a great integration of the technical tools that will help curb bad behaviour on the platform, while also reminding people of their own control over their account,’ said Stephen Balkam, CEO of the Family Online Safety Institute.
‘It’s important to use language that empowers people to make wise decisions and think more critically about who they’re interacting with online.
‘We’re especially glad to see this reflected in the thoughtful approach around safety considerations for younger users.’