Last summer, senior Mason Wang, already experienced with research and artificial intelligence, reached out to see if he could help two Stanford professors with Noora, a project designed to create a chatbot that assists those with autism spectrum disorder (ASD). To his surprise, they agreed to bring him on board.
Noora is a chatbot that presents ASD patients with a variety of social scenarios and gives feedback on their responses, helping them learn how to hold conversations or respond to questions they don’t understand.
Despite his previous experience in research, Wang had to learn a lot to work alongside professors Monica Lam, who specializes in natural language processing, and Lynn Kern Koegel, who specializes in psychology.
“Overall, when working with my professors, I learned a great deal on how to ask questions,” Wang said. “For example, I learned what to question when we get experimental results, what questions to ask when we look at a new problem and how to read research papers and question them.”
To contribute to Noora, Wang had to learn how to prompt a Generative Pre-trained Transformer (GPT), a neural network that already knows how to process and respond to language and can be further tuned by users. By using a GPT, Wang does not have to create a chatbot from scratch; instead, he can adapt an existing model to respond appropriately to and connect with ASD patients.
“We’re shifting away from this paradigm of gathering data and training a model on it and instead learning how to prompt a GPT,” he said.
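The prompting workflow Wang describes can be sketched in a few lines: rather than training a model on gathered data, the chatbot’s behavior is written out as natural-language instructions that are sent to a pre-trained model along with each user turn. The function and message format below are illustrative assumptions, not Noora’s actual code.

```python
# Hypothetical sketch of prompting a pre-trained GPT instead of training one.
# The behavior is specified entirely in the system prompt; no dataset needed.

def build_coach_prompt(scenario: str, user_reply: str) -> list[dict]:
    """Assemble chat messages asking a GPT to coach a social response."""
    system = (
        "You are a friendly social-skills coach. Present everyday social "
        "scenarios, then give brief, encouraging feedback on the user's reply."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"Scenario: {scenario}\nMy reply: {user_reply}"},
    ]

messages = build_coach_prompt(
    scenario="A classmate says: 'How was your weekend?'",
    user_reply="Fine.",
)
# This list would be passed to a chat-completion API; the pre-trained model
# does the rest, so no data gathering or model training is required.
print(messages[0]["role"])  # prints: system
```

In this paradigm, iterating on the chatbot means editing the prompt text rather than retraining a model.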
So far, the team has held informal tests with a few ASD patients and used their observations to iterate on and improve Noora’s responses. For example, during initial testing, Wang noticed that patients would occasionally skip through large blocks of text, so he modified Noora to read all of its responses aloud instead of just displaying them on a screen.
To gather and report data from human subjects in a paper, he and his team must first obtain approval from the Institutional Review Board (IRB). If approval comes through, the team hopes to hold formal trials with a group of patients near the end of the summer.
Beyond applications like Noora in research and clinical settings, Wang believes that GPTs will be the next great innovation in technology.
“GPTs are pretty revolutionary to the point where the majority of people don’t even have an idea how revolutionary it is,” he said. “I think in 20 years, everyone will be using generative artificial intelligence (AI), even schools.”
Wang has been working on Noora for the past year, spending the entire summer after his junior year prototyping and testing his initial ideas. Now that college applications are over, he is spending more time preparing Noora for its testing phase.
Compared to the theoretical aspects of his other research projects, like his dive into sparse fusion transformers (SFTs), Wang finds the practicality of Noora more interesting. Inspired by his experiences with Noora, he plans to major in computer science and minor in psychology at Stanford University.
“The theoretical things are very intellectually stimulating and satisfying to understand, but it’s the practical building and testing [of the chatbot] that I love,” he said.
Apart from Noora, Wang is also working on Hazel, his AI startup focused on helping realtors streamline their work, which he founded a couple of months ago with other incoming Stanford students. He has also worked on OVALChat, which would allow GPTs to incorporate Wikipedia into their responses to give more accurate, factual information. He is so invested in the startup that he will take a gap year next year to continue working on it.
Wang got his first taste of research during his freshman year working with Class of 2022 alumnus and current Harvard freshman Vignav Ramesh. They collaborated on a project for HackMIT, an event where students compete and show off recent software or hardware projects. Their project, Latent Space, used recurrent autoencoders, neural networks that learn compact representations of their input, to compress audio files and send them across the internet.
In theory, this process would allow widely used video calling platforms such as Zoom or Google Meet to reduce the delay between one side speaking and the other users hearing the sound. The two later wrote, but did not publish, a brief documenting their findings and results from implementing the neural network.
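The compression idea behind Latent Space can be illustrated with a toy encoder and decoder (setting aside the recurrent part): the encoder squeezes each audio frame into a much smaller latent vector, which is what would travel over the network, and the decoder reconstructs the frame on the receiving end. The weights below are random placeholders for illustration; a real autoencoder learns them from data.

```python
# Toy autoencoder-style compression sketch: send the small latent vector
# instead of the full audio frame, then reconstruct on the other side.
import numpy as np

rng = np.random.default_rng(0)
frame_size, latent_size = 512, 32          # 16x fewer numbers to transmit

W_enc = rng.standard_normal((latent_size, frame_size)) * 0.05
W_dec = rng.standard_normal((frame_size, latent_size)) * 0.05

def encode(frame: np.ndarray) -> np.ndarray:
    return np.tanh(W_enc @ frame)          # compressed latent code

def decode(latent: np.ndarray) -> np.ndarray:
    return W_dec @ latent                  # reconstructed audio frame

frame = rng.standard_normal(frame_size)    # one frame of audio samples
latent = encode(frame)                     # this is what would be sent
recovered = decode(latent)

print(latent.shape, recovered.shape)       # prints: (32,) (512,)
```

With trained weights, the decoder’s output approximates the original frame, so far less data has to cross the network for roughly the same audio.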
“Latent Space was born from other researchers’ discoveries, as were so many of my other projects,” Wang said. “After engaging with so many other people’s research, I was inspired to conduct my own.”
The summer after his sophomore year, Wang attended the Research Mentorship Program (RMP) at UC Santa Barbara, where he worked under a doctoral student. Together, they created two projects and wrote a paper exploring sparse fusion transformers (SFTs). SFTs modify standard transformer neural networks, a type of network well suited to processing text, to reduce computational costs and memory use.
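A rough way to see where the savings come from: full self-attention compares every token with every other token, while a sparse pattern restricts each token to a small window. The sketch below counts score computations for a generic windowed scheme; it is a simplifying assumption for illustration, not the specific SFT design from Wang’s paper.

```python
# Back-of-the-envelope comparison of attention costs. Full self-attention
# scores every token pair (n^2 pairs); a sparse variant that only attends
# within a local window of width w scores roughly n*w pairs.

def full_attention_pairs(n: int) -> int:
    return n * n                      # every query attends to every key

def windowed_attention_pairs(n: int, w: int) -> int:
    return n * w                      # each query attends to at most w keys

n, w = 4096, 128
print(full_attention_pairs(n))        # prints: 16777216
print(windowed_attention_pairs(n, w)) # prints: 524288 (32x fewer scores)
```

The same counting argument explains the memory savings, since the score matrix shrinks by the same factor.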
“I initially got into it [AI], because I thought it was cool, and I think that’s a good enough reason to just try something out,” Wang said. “Now, I’m convinced that AI is the future, and I’m excited to see where it goes.”