Arijit (Ari) Sen encouraged the use of artificial intelligence to augment reporters, not replace them, during his Pulitzer Center Crisis Reporting Lecture held Sept. 28 in the Lewis and Clark Room of South Dakota State University's Student Union.
Sen, an award-winning computational journalist at The Dallas Morning News and a former A.I. accountability fellow at the Pulitzer Center, discussed ways A.I. could be used and potential risks of A.I. in journalism.
"A.I. is often vaguely defined, and I think if you listen to some people, it's like the greatest thing since sliced bread," Sen said, before quoting Sundar Pichai, CEO of Google's parent company Alphabet, who has called A.I. the most profound technology humanity is working on.
According to Sen, A.I. is basically "machine learning": teaching computers to use math to find patterns in data. Once a model is trained, it can generate a number and make predictions, often by sorting things into categories.
Sen said the more important questions are how A.I. is being used in the real world and what real harms the technology is causing people.
"There is a really interesting thing happening right now. Probably since about 2015, A.I. is starting to be used in investigative journalism specifically," Sen said, pointing to a story by the Atlanta Journal-Constitution (AJC) on doctors and sex abuse, in which around 100,000 disciplinary complaints had been filed against doctors. Because of the sheer volume, AJC trained a machine learning model on complaints labeled as related or unrelated to sexual assault, making it easier to sort the records and compile the story.
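The AJC-style sorting Sen describes is a text classification task. A minimal sketch of the idea, using a bare-bones naive Bayes classifier; the toy complaint snippets below are invented stand-ins, since the real AJC data and model are not public:

```python
import math
from collections import Counter

# Invented toy examples standing in for labeled disciplinary complaints.
train = [
    ("inappropriate sexual contact with patient", "related"),
    ("sexual misconduct during examination", "related"),
    ("billing fraud and false insurance claims", "unrelated"),
    ("prescription record keeping errors", "unrelated"),
]

def word_counts(texts):
    c = Counter()
    for t in texts:
        c.update(t.split())
    return c

# Count word frequencies per label (the "training" step).
labels = {"related", "unrelated"}
counts = {lbl: word_counts(t for t, l in train if l == lbl) for lbl in labels}
totals = {lbl: sum(counts[lbl].values()) for lbl in labels}
vocab = set()
for c in counts.values():
    vocab.update(c)

def classify(text):
    # Score each label by summed log-probabilities with add-one smoothing.
    best, best_score = None, -math.inf
    for lbl in labels:
        score = 0.0
        for w in text.split():
            p = (counts[lbl][w] + 1) / (totals[lbl] + len(vocab))
            score += math.log(p)
        if score > best_score:
            best, best_score = lbl, score
    return best

print(classify("sexual contact complaint"))   # → "related"
print(classify("insurance billing errors"))   # → "unrelated"
```

A real newsroom model would use far more labeled examples and a more robust pipeline, but the shape is the same: label a sample by hand, train on it, then let the model triage the rest.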
Although A.I. can be useful for investigative journalism, Sen explained the technology's risks and the questions to ask about the people behind a model: who labels the data, what the creator's intentions are, and whether humans given a longer time frame could do the same work.
“The other question we need to think about when working with an A.I. model is asking if a human could do the same thing if we gave them unlimited amount of time on a task,” Sen said. “And if the answer is no, then what makes us think that an A.I. model could do the same thing.”
Sen further elaborated on A.I. bias and fairness with another case study: Amazon scrapped its secret A.I. recruiting tool after it showed bias against women. Amazon had used its current engineers' resumes as training data; because most of those engineers were men, the model learned a bias against women and ranked them worse than male candidates.
"One of the cool things about A.I. in accountability reporting is that we're often using A.I. to investigate A.I.," Sen said as he dove into his major case study: Social Sentinel.
Sen described Social Sentinel, now known as Navigate360, as an A.I. social media monitoring tool used by schools and colleges to scan for threats of suicide and shootings.
“Well, I was a student, just like all of you at University of North Carolina at Chapel Hill (UNC) and there were these protests going on,” Sen said. “You know, I being the curious journalist that I was, I wanted to know what the police were saying to each other behind the scenes.”
Sen's curiosity led him to file a batch of records requests, which initially yielded around 1,000 pages. Among them he found a contract between his university and Social Sentinel, which made him wonder whether his college was using a "sketchy" A.I. tool. Sen landed an internship at NBC and wrote the story, which was published in December 2019.
“Around that time, I was applying for journalism at grad school, and I mentioned this in my application at Berkeley,” Sen said. “I was like, this is why I want to go to grad school; I want two years to report this out because I knew that straight out of undergrad no one was going to hire me to do that story.”
He recalled spending his first year on a clip search, reading about Social Sentinel, and finding that no one was looking at colleges, which he called "weird" since the company had been founded by two college campus police chiefs. He spent the rest of the year calling colleges and writing story pitches.
In his second year at Berkeley, Sen was paired with his thesis advisor, David Barstow, and filed public records requests across the country, covering at least 36 colleges and every four-year college in Texas.
"We ended up with more than 56,000 pages of documents by the end of the process," Sen said.
Once the documents were in hand, Sen built databases in spreadsheets and analyzed the alerts Social Sentinel had sent as PDFs. He then analyzed tweets for threatening content, looking for the most frequent terms after filtering out punctuation and common stop words.
“You can see the most common word used was ‘shooting’ and you can see that would make sense,” Sen said. “But a lot of times ‘shooting’ meant like ‘shooting the basketball’ and things like that.”
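The filtering-and-counting step Sen describes can be sketched in a few lines of Python; the tweets and the tiny stop-word list below are invented for illustration (a real analysis would use a fuller stop-word list):

```python
import string
from collections import Counter

# Hypothetical tweet texts; the real Social Sentinel alerts are not
# reproduced here.
tweets = [
    "Great shooting tonight! #basketball",
    "He was shooting threes all game.",
    "Shooting hoops at the rec center.",
]

# A minimal stop-word list, just for this sketch.
stop_words = {"the", "a", "an", "he", "was", "all", "at", "and", "of"}

def top_words(texts, n=3):
    counter = Counter()
    for text in texts:
        # Strip punctuation, lowercase, then drop stop words.
        cleaned = text.translate(str.maketrans("", "", string.punctuation))
        words = [w for w in cleaned.lower().split() if w not in stop_words]
        counter.update(words)
    return counter.most_common(n)

print(top_words(tweets))  # "shooting" comes out on top, with count 3
```

As the example shows, a raw frequency count surfaces "shooting" even when every use is about basketball, which is exactly the ambiguity Sen flagged.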
With all this information acquired, Sen began interviewing experts, former Social Sentinel employees, colleges that used the service, and students and activists who had been surveilled.
Through this reporting, Sen arrived at three findings. First, and most significant, the tool was not really being used to prevent suicides and shootings; instead, it was used to monitor protests and activists. Second, Social Sentinel was trying to expand beyond social media into services such as Gmail and Outlook. Lastly, there was little evidence the tool had saved lives, despite the company's claims of success.
Sen concluded that the story's impact spread to other media outlets, which went on to publish their own coverage of A.I. monitoring of student activity, and UNC eventually stopped using the service. Sen then took questions from the audience.
According to Joshua Westwick, director of the School of Communication and Journalism, the lecture was timely, especially given the increased conversations about A.I.
"Ari Sen's lecture was both engaging and informative. The examples that he shared illuminated the opportunities and challenges of A.I.," Westwick said. "I am so grateful we could host Ari through our partnership with the Pulitzer Center."
Westwick further explained that the lecture was exceptionally important for students and attendees as A.I. is present throughout many different aspects of our lives.
“As journalists and consumers, we need to understand the nuances of this technology,” Westwick said. “Specifically, for our journalism students, understanding the technology and how to better report on this technology will be important in their future careers.”
Greta Goede, editor-in-chief for the Collegian, described the lecture as one of the best lectures she has attended. She explained how the lecture was beneficial to her as Sen spoke about investigative journalism and how to look up key documents before writing a story.
“He (Sen) talked a lot about how to get data and how to organize it, which was really interesting to me since I will need to learn those skills as I go further into my career,” Goede said. “I thought it was a great lecture and enjoyed attending.”