Bringing the ‘Science’ to Social Sciences: A Workshop on Using AI Techniques in Arts and Humanities Research

Blog
May 30, 2021
Author(s):
Valerie Leow, J.D. Candidate, University of Alberta

Congress 2021 blog edition

Geared toward researchers keen to experiment with generative artificial intelligence (AI) in their research, the two-hour workshop, “Using Generative AI Techniques in the Arts and Humanities,” aimed to show participants that it is possible to train new AI models to generate text, sound, and images by giving them the chance to experiment with code that generates text from provided training materials. The workshop was led by University of Alberta PhD student Paolo Verdini, hosted by AI4Society, a University of Alberta Signature Area, and organized by AI4Society Associate Director Geoffrey Rockwell.

To start things off, what are recurrent neural networks (RNNs) anyway? Verdini suggested visualizing an RNN as more akin to a human brain than to traditional machine learning algorithms. Imagine a person reading a piece of text. If the book is particularly complex, you may stop several times for a break and pick it up again later. Whatever your reading process looks like, by the end of your first read you likely have only a basic grasp of the book’s contents, and may have forgotten some of the material; your understanding is partial. On a second or third re-read you understand more, and your mastery of the contents grows with each pass. A neural network works in much the same way – except that it has no need to take breaks, of course. It goes through the data you feed it over and over, as many times as you instruct it to, and with each pass the model improves. Additionally, according to Verdini, the more data you feed your neural network, the better the resulting model will be.
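
To make the ‘re-reading’ idea concrete, here is a minimal, hypothetical sketch in Python of a small recurrent network trained for several epochs on placeholder toy data (none of this is the workshop’s own code). Each epoch is one complete pass over the dataset, and the loss reported after each pass typically shrinks as the model’s grasp of the data improves:

```python
# Toy demonstration of epochs: the network reads the same dataset
# once per epoch, adjusting its weights a little on every pass.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
seq_len, vocab_size = 20, 30
x = rng.random((200, seq_len, vocab_size))   # placeholder input sequences
y = rng.integers(0, vocab_size, size=200)    # placeholder targets

model = keras.Sequential([
    keras.layers.LSTM(64, input_shape=(seq_len, vocab_size)),
    keras.layers.Dense(vocab_size, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# epochs=10 tells the network to read the full dataset ten times over;
# the per-epoch loss is printed as training runs.
model.fit(x, y, epochs=10, batch_size=32)
```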

This workshop was conducted via Google Colaboratory (or ‘Google Colab’ for short), a free, cloud-based Jupyter notebook environment that lets you train machine learning and deep learning models on CPUs (central processing units), GPUs (graphics processing units), and TPUs (tensor processing units), the latter two of which Google Colab offers at no charge. As Verdini put it, Colab lets you write and run code live online without installing any software on your own computer, with the added advantage of making it easy to share your code with others.
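
For instance, one quick way to confirm which accelerator your Colab runtime is offering (assuming TensorFlow, which Colab pre-installs) looks like this:

```python
# List any GPU visible to the current runtime; hardware can be switched
# under Runtime > Change runtime type in the Colab menu.
import tensorflow as tf

print("GPUs available:", tf.config.list_physical_devices("GPU"))
```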

During the workshop, participants were first introduced to the Python programming language by writing a short program that combines pre-defined phrases into randomly generated sentences. The second part of the workshop was a hands-on tutorial on training and using generative AI techniques through two approaches: the first, simpler approach involved training a standard recurrent neural network (RNN) model on Lewis Carroll’s Alice in Wonderland and using it to generate a new paragraph in the book’s style; the second, more complex approach involved fine-tuning GPT-2, a state-of-the-art model for AI text generation, on Humanities and Arts paper titles and using it to generate new, random paper titles. Sketches of both ideas follow below.
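
As a rough illustration of that first exercise, a few lines of Python are enough to combine pre-defined phrases into random sentences. The word lists below are illustrative stand-ins, not the workshop’s own materials:

```python
# Combine pre-defined phrase fragments into randomly generated sentences.
import random

subjects = ["The scholar", "A curious reader", "The archive"]
verbs = ["examines", "reimagines", "questions"]
objects = ["the manuscript", "a forgotten letter", "the digital text"]

# Each pass stitches one random choice from each list into a sentence.
for _ in range(3):
    print(f"{random.choice(subjects)} {random.choice(verbs)} {random.choice(objects)}.")
```

And for a taste of the GPT-2 side, here is a minimal sketch of generating text from the stock pre-trained model with the Hugging Face transformers library, one common route in Colab; the workshop’s notebook went further and fine-tuned the model on paper titles, which this sketch does not attempt:

```python
# Generate text from a pre-trained GPT-2 model. The prompt below is a
# made-up paper-title opener, chosen only for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
for result in generator("Reading the Archive:", max_length=20,
                        num_return_sequences=3, do_sample=True):
    print(result["generated_text"])
```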

Want to try your hand at incorporating generative AI into your own research, or even for your own general use? Try Google Colab yourself for free by clicking on the following link: https://bit.ly/3p0hbmh