The US Federal Trade Commission (FTC) has released a new report advising businesses on what they need to consider when working with big data, with the regulator particularly urging firms to avoid strategies that produce exclusionary or discriminatory outcomes.

In the document released last week (January 6th), the body looked specifically at the end of the big data lifecycle – how information is used after it has been collected and analysed. It noted that while the technology has a wide range of benefits, it can also lead to reduced opportunities for certain groups and the targeting of vulnerable consumers for fraud and higher prices.

FTC chairwoman Edith Ramirez said: "Big data's role is growing in nearly every area of business, affecting millions of consumers in concrete ways. The potential benefits to consumers are significant, but businesses must ensure that their big data use does not lead to harmful exclusion or discrimination."

The report observed that the positive outcomes of big data are not limited to improved results at individual enterprises. There are also a number of wider societal benefits, such as improved healthcare, education and equality.

For example, Google is among a number of businesses that are deploying big data solutions as part of their hiring processes, in order to create a more diverse workforce. "Through analytics, Google identified issues with its hiring process, which included an emphasis on academic grade point averages and 'brainteaser' questions during interviews," the report said. 

"Google then modified its interview practices and began asking more structured behavioural questions (e.g., how would you handle the following situation?). This new approach helped ensure that potential interviewer biases had less effect on hiring decisions."

However, while big data can be used to eliminate many personal biases from businesses' decision-making, the technology could, if used incorrectly, introduce new types of discrimination that affect opportunities for citizens.

One example highlighted in the FTC's report involves credit card providers that lowered certain customers' credit limits based not on those customers' own payment history, but on analysis of other consumers with poor repayment histories.

In one case, a provider settled with the FTC after it was alleged to have failed to disclose that it identified some customers as having a higher credit risk if they used their cards to pay for marriage counselling, therapy or tyre repair services, based on its experiences with other consumers and their repayment histories.

"Using this type of a statistical model might reduce the cost of credit for some individuals, but may also result in some creditworthy consumers being denied or charged more for credit than they might otherwise have been charged," the FTC said.

Other potential issues with big data include its ability to expose sensitive information. The FTC highlighted one study in which researchers combined Facebook 'Likes' with limited survey information and were able to predict a user's ethnic origin with 95 per cent accuracy and male users' sexual orientation 88 per cent of the time.

The report offered several recommendations to ensure that the use of big data does not lead to discrimination. These include reviewing data sets and algorithms to ensure that hidden biases are not having an unintended impact on certain populations.
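
The report does not prescribe a specific test, but one common way to put that recommendation into practice is a disparate-impact check along the lines of the 'four-fifths rule' from US employment-selection guidance. The sketch below uses invented group labels and decision data:

```python
# Minimal disparate-impact audit: compare the rate of favourable outcomes
# (e.g. loan approvals) across groups. The 0.8 threshold follows the
# "four-fifths rule"; group labels and decisions are invented examples.

def selection_rates(outcomes):
    """outcomes: dict mapping group label -> list of 0/1 decisions."""
    return {g: sum(d) / len(d) for g, d in outcomes.items()}

def disparate_impact_ratio(outcomes):
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

decisions = {
    "group_a": [1, 1, 0, 1, 1, 1, 0, 1],  # 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved
}

ratio = disparate_impact_ratio(decisions)
print(f"impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential disparate impact - review features and training data.")
```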

"Remember that just because big data found a correlation, it does not necessarily mean that the correlation is meaningful," it concluded. "As such, you should balance the risks of using those results, especially where your policies could negatively affect certain populations."