AI bias must be tackled to avoid it 'unknowingly' harming people
Experts warn we must "think about the ethical implications" of AI bias
While AI hasn't quite come of age, it has now reached a point where most people understand its benefits.
However, for all the benefits on offer, companies looking to take advantage of AI must still put ethical considerations and the avoidance of bias on the priority list, according to a panel session held at Salesforce's Dreamforce conference in San Francisco this week.
"Accuracy levels are so high now that the kind of things you can do in one year were not possible years ago with hundreds of people," said Richard Socher, chief scientist at Salesforce.
"Now that this stuff is working, we really need to think about the ethical implications."
Kathy Baxter, an architect in Salesforce's Ethical AI Practice, concurred about the need to ensure such sophisticated technologies do more good than harm, adding: "How do we rebuild software that truly has a positive impact on the people it serves?"
"AI can do so much tremendous good, but it has the potential to unknowingly harm individuals. We can't expect AI to magically exclude bias; in society, bias is baked in."
Baxter continued: "How do we represent the world that we want and not the world as it is?"
Given that AI essentially needs to learn, it takes its lead from human beings, so it's the responsibility of humans to act ethically and do the right thing in AI development, agreed the panel, which was moderated by Salesforce futurist Peter Schwartz.
Baxter stressed that in particular, there's a need to ensure that people are not adversely impacted because of factors they cannot change or control, such as gender or race.
The panel highlighted that educating people on the shortcomings of AI and its potential for bias will be just as important as promoting the benefits of smart systems. Ultimately, as with any technology today, the results you get out are only as good as the data you put in, and the same is true of AI as it stands now.
"AI will have a bigger impact than the internet on humanity," Socher added. "AI will pick up bias and either amplify it or keep it going. We have to educate people that AI is only as good as the training data."
When it comes to that training data, Baxter said Salesforce recognised its role in boosting awareness and education levels. Using Trailhead, as well as other AI-focused resources, the cloud firm hopes to open people's eyes to the potential and the pitfalls so they can make informed decisions.
"The quality of that training data is key. It helps customers see and understand the data so they can identify if there is any bias there, or if there are any errors, so they can correct it," Baxter added.
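To make the point concrete, here is a minimal sketch of what auditing training data for bias can look like in practice. This is purely illustrative and not Salesforce's tooling; the dataset, column names, and disparity threshold are all hypothetical. It compares the rate of positive outcomes across groups defined by a protected attribute, the kind of skew a model would otherwise learn and amplify.

```python
# Illustrative sketch (hypothetical data, not any vendor's tool): audit a
# training dataset for outcome-rate disparity across a protected attribute.
from collections import defaultdict

def outcome_rates(records, group_key, label_key):
    """Return the positive-label rate for each group in the data."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for row in records:
        totals[row[group_key]] += 1
        positives[row[group_key]] += row[label_key]
    return {g: positives[g] / totals[g] for g in totals}

def disparity(rates):
    """Gap between the best- and worst-treated groups (0 = perfectly even)."""
    return max(rates.values()) - min(rates.values())

# Toy loan-approval records: group B is approved far less often than group A,
# so a model trained on this data would likely reproduce the same skew.
data = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 1}, {"group": "A", "approved": 0},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

rates = outcome_rates(data, "group", "approved")
print(rates)                   # {'A': 0.75, 'B': 0.25}
print(disparity(rates) > 0.2)  # True: flags the large gap in this toy data
```

A real audit would of course use far larger datasets and more careful statistical measures, but the principle is the one Baxter describes: surface the skew in the data so it can be corrected before a model learns it.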
"Ethics is a mindset, not a checklist, and we need to instil it early on."