Data on what you buy, how you buy it, and where is secretly fed into AI-powered verification services, according to the Wall Street Journal. These services are supposed to help companies guard against credit-card fraud and other forms of fraud.
One such service, Sift, analyzes more than 16,000 signals to generate a "Sift score," an overall "trustworthiness" rating that vendors can use to flag devices, credit cards, and accounts they may want to block. From the Sift website: "Each time we get an event -- be it a page view or an API event -- we extract features related to those events and compute the Sift Score. These features are then weighed based on fraud we've seen both on your site and within our global network, and determine a user's Score. There are features that can negatively impact a Score as well as ones which have a positive impact."
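That description suggests a familiar pattern: reduce each event to features, apply learned weights (some pushing the score up, some down), and combine them into a single number. Sift's actual model, features, and weights are proprietary and not public, so the following Python sketch is purely illustrative; every feature name and weight in it is made up.

```python
import math

# Hypothetical illustration of the general pattern Sift describes:
# per-event features, signed weights, one combined score.
# All names and values below are invented for this example.

WEIGHTS = {
    "mismatched_billing_country": 2.1,   # positive weight: raises the score (riskier)
    "disposable_email_domain":    1.4,
    "rapid_repeat_purchases":     0.9,
    "account_age_days":          -0.01,  # negative weight: lowers the score (safer)
    "verified_phone":            -1.3,
}
BIAS = -2.0  # baseline log-odds before any features are observed

def risk_score(features: dict) -> float:
    """Combine weighted features into a 0-100 risk score via a logistic function."""
    log_odds = BIAS + sum(WEIGHTS.get(name, 0.0) * value
                          for name, value in features.items())
    return 100.0 / (1.0 + math.exp(-log_odds))

# Example event: a page view or API event reduced to feature values.
event = {
    "mismatched_billing_country": 1.0,
    "disposable_email_domain": 1.0,
    "account_age_days": 30.0,
    "verified_phone": 0.0,
}
print(f"score: {risk_score(event):.1f}")  # higher = more likely flagged
```

A real system would learn its weights from labeled fraud data across many merchants rather than hard-coding them, but the shape of the computation, signed feature weights feeding a bounded score, matches what the quote describes.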
The system resembles a credit score, except there is no way to find out your own Sift score. It also sounds a lot like the data that feeds, in part, China's social credit system. In the PRC, a person's social score can rise or fall depending on their behavior. The exact methodology is secret, but reported infractions include bad driving, smoking in non-smoking zones, buying too many video games, and posting fake news online. While Edward Snowden certainly demonstrated the global extent of the US surveillance state, corporate entities have not implemented anything on the level of the Chinese social scoring system. Yet.