Tuesday, September 24, 2019

Historical GIS

Hello All!
Welcome back. This week's focus is on Historical GIS.


Historical GIS situates history in its geographical context in ways that illuminate the past (Knowles, 2008:3), essentially joining the when with the where. Historical GIS takes qualitative, quantitative, and visual evidence and allows the researcher to reveal spatial relationships that would otherwise go unseen.

Teaching GIS helps students refine their problem-solving skills. As Knowles (2008) points out, students are not only faced with solving a problem, but also tasked with identifying and articulating that problem. Additionally, GIS allows the user to create maps or add layers to imported maps as a means of understanding space and place in an ever-changing environment, which provides another method to help support or challenge arguments drawn from texts. Teaching students how to use GIS enables them to learn how to create and organize files, build databases, write queries, and use graphics as a means of communication. While learning GIS is akin to learning another language, or at least adding another tool to your methodological tool belt, it is certainly a handy, transferable skill that can be useful in other facets of study and life.
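
To make those skills concrete, here is a minimal sketch of what such a workflow might look like in Python with the open-source geopandas library, assuming a hypothetical sites.csv file with name, year, latitude, and longitude columns; it is only an illustration of the idea (organize a file, build a spatial table, query it, map it), not a prescribed method.

    # Minimal GIS-style workflow sketch: load a hypothetical table of historical
    # sites, build a small spatial layer, query it, and map the result.
    import pandas as pd
    import geopandas as gpd
    import matplotlib.pyplot as plt

    # 1. Organize the evidence as a table (sites.csv is a made-up file with
    #    columns: name, year, latitude, longitude).
    sites = pd.read_csv("sites.csv")

    # 2. Build a spatial layer from the coordinates (WGS84 latitude/longitude).
    gdf = gpd.GeoDataFrame(
        sites,
        geometry=gpd.points_from_xy(sites.longitude, sites.latitude),
        crs="EPSG:4326",
    )

    # 3. Write a query: keep only sites documented before 1850.
    early_sites = gdf[gdf["year"] < 1850]

    # 4. Use graphics to communicate: plot the query result as a simple map layer.
    ax = early_sites.plot(color="firebrick", markersize=20)
    ax.set_title("Sites documented before 1850 (illustrative)")
    plt.show()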

The ability to take geographical information from a multitude of textual sources and map it out for visual interpretation makes one scratch one's head as to why historians haven't flocked to the GIS bandwagon before now. One of the difficulties is a general lack of information: history is fraught with missing, contradictory, or otherwise ambiguous data. One problem I am beginning to see is that historians sometimes take things at face value (sorry, historians!). One such example is the Peutinger Map of the Roman world (Figure 1). The surviving copy is likely missing a segment or three and was believed for a century or more to depict a, for lack of a better adjective, pretty “route diagram” (Knowles, 2008:200). However, after digitizing the map and analyzing its different layers, researchers now believe it depicts the actual geography surrounding these routes. Prior to digital analysis of the map segments, the map was brushed off as a unique yet fairly typical route itinerary of the kind often seen in manuscripts. While this is not strictly an application of GIS, it does show how something as simple as a map could be misunderstood for ages prior to digital analysis.

Figure 1: Peutinger Map. Creative Commons 1.0 Universal Public Domain Dedication (CC0 1.0)






Because weather patterns prior to the 1930s were minimally recorded and largely forgotten, the Dust Bowl was assumed to have been caused by over-farming and land mismanagement. However, mapping the affected counties of Colorado, Kansas, Oklahoma, Texas, and New Mexico shed light on an assortment of complexities that led to it. Researchers mapped annual rainfall, the percentage of each county turned to cropland, rainfall differences during drought periods, and the locations of dust storms within the county areas. They also mapped newspaper descriptions of dust storm observations in Kansas between 1854 and 1896. They found that while farming likely didn't help the situation, dust storms had been happening fairly regularly in the region and were simply worse during periods of significant drought. The Dust Bowl stood out as an event because dust storms were reported much more frequently than in the past and because a decade-long drought made the situation worse (Figure 2).

Figure 2: Dust Storm in Texas, 1935. Image Credit: Wikimedia Commons, Public Domain.
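
As a rough illustration of the kind of county-level mapping described above (not the researchers' actual data or workflow), one could join a rainfall table to county boundaries and shade each county accordingly; the file names and column names below are hypothetical.

    # Illustrative county choropleth: join a hypothetical rainfall table to
    # county boundaries and shade counties by mean annual rainfall.
    import geopandas as gpd
    import pandas as pd
    import matplotlib.pyplot as plt

    counties = gpd.read_file("counties.shp")   # boundaries with a FIPS column
    rain = pd.read_csv("rainfall.csv")         # columns: FIPS, rainfall_in

    # Join attributes to geometry on the shared county FIPS code.
    merged = counties.merge(rain, on="FIPS")

    # Shade each county by rainfall to expose the regional pattern.
    ax = merged.plot(column="rainfall_in", cmap="YlGnBu", legend=True)
    ax.set_title("Mean annual rainfall by county (illustrative)")
    plt.show()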


The point of these case studies, relative to my initial argument about missing contextual information in history, is that without zooming out to look at the larger picture, you only see bits and pieces of history, and you end up interpreting an even more incomplete picture.

I think the case studies above show how GIS can change, and is changing, historical scholarship. Using these methods to look at historical texts, imagery, and other sources allows historians to ask different sets of questions, form new hypotheses, add to debates, and further the discipline. While GIS is not a perfect tool for every historical need, it does open up new perspectives on, and interpretations of, historical events, place, and space.


Thanks for stopping by! I hope you enjoyed your visit.

-The Migrant Isotopist


Knowles, A.K. (Ed.) (2008). Placing History: How Maps, Spatial Data, and GIS Are Changing Historical Scholarship. Redlands, California: ESRI Press.

Tuesday, September 17, 2019

Text-Mining and Topic-Mining



Hello all! Welcome back! 
This week I am talking about distant reading and text mining. 


Distant reading means looking broadly across articles or other documents and pulling out patterns. This approach is extremely useful when you are starting research and need to see how frequently terms are used, or when you are trying to determine which synonyms are worth investigating. It can also help surface hidden tones and give a sense of the biases of the time. One example of this is looking back at reports from the US Civil War.
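
For anyone curious what that kind of frequency check looks like in practice, here is a minimal sketch in Python; the folder name and the search terms are stand-ins, not a real corpus or a fixed vocabulary.

    # Minimal distant-reading sketch: count how often chosen terms appear
    # across a hypothetical folder of plain-text documents.
    from collections import Counter
    from pathlib import Path
    import re

    terms = {"secession", "slavery", "tariff"}   # example terms only
    counts = Counter()

    for doc in Path("reports").glob("*.txt"):    # hypothetical corpus folder
        words = re.findall(r"[a-z]+", doc.read_text(encoding="utf-8").lower())
        counts.update(w for w in words if w in terms)

    for term, n in counts.most_common():
        print(f"{term}: {n}")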

I thought Ayers's (2011) New York Times article was really fascinating. As far as US history goes, the Civil War era is my favorite to read and learn about. He argued that computer-aided methods help us gain a better understanding of a region from large amounts of source material, and that utilizing these methods can elucidate alternative conclusions. In the article, Ayers was able to identify a different “primary cause” of the Civil War. It is interesting that these computer-aided tools can help uncover patterns that are otherwise difficult to see.

Another interesting way to dig through a lot of information is text mining, which looks at the frequency of words or topics within a certain period of time. Ewing et al. (2014) provide an excellent example in their article on the flu epidemic. They used two text-mining methods: topic modeling and tone classification. Through this work they were able to uncover how often different words were used in reports during different stretches of time, both within the local community and outside of it. They also looked at the tone of newspaper reports about the flu, developing four classifications: alarmist, warning, reassuring, and explanatory. This scheme was used to examine how the tone of reporting prompted public health intervention, and through the exercise they were able to see the tone of reporting shift from the beginning of the epidemic to the end.
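
Their actual pipeline is not reproduced here, but as a rough sketch of what topic modeling looks like in code, here is a tiny example using scikit-learn's LDA on a few invented newspaper-style sentences; the documents and the number of topics are purely illustrative.

    # Toy topic-modeling sketch (scikit-learn LDA), not Ewing et al.'s pipeline.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = [  # tiny invented stand-ins for newspaper reports
        "influenza cases rise as schools close across the city",
        "health officials reassure the public that the epidemic is waning",
        "alarming spread of the flu prompts emergency measures",
        "churches and theaters ordered closed to slow the outbreak",
    ]

    vec = CountVectorizer(stop_words="english")
    X = vec.fit_transform(docs)

    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    lda.fit(X)

    # Print the top words associated with each discovered topic.
    words = vec.get_feature_names_out()
    for i, topic in enumerate(lda.components_):
        top = [words[j] for j in topic.argsort()[-5:][::-1]]
        print(f"Topic {i}: {', '.join(top)}")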

After reading these articles and beginning to understand their methods, I wanted to see how I could employ them in my own research. I tried three different tools: Google Ngram Viewer, Voyant, and JSTOR Data for Research. While playing around with Google Ngram and Voyant was fun, I was unable to figure out how to use JSTOR Data for Research. That is more likely an issue on my end than with the tool itself.

The Google Ngram Viewer is a pretty cool tool. As a test, I searched the phrases Beauty and the Beast, Cinderella, Snow White, and Rapunzel, then compared how the phrases fared across the American English, British English, French, and German corpora. It was interesting that there would be such a difference (as seen below). 








Then I wanted to see the results for terms I would use in my own research, so I searched peasant, Christian, and religion, again across the English, French, and German corpora. The reason for this is that information relevant to my research is more likely to be found in French or German than in English, so I wanted to see whether the frequencies changed and whether that revealed anything (see below). I do wonder how searching with French or German terms, rather than English ones, would change the results.






Voyant provides a really cool visual of the most frequently used words in a document. For this example, I used my Master's thesis to create a word cloud of the top 55 words (see below). While these tools are undoubtedly useful (and pretty interesting), I am not yet sure how much they would add to my own research, but it is definitely worth investigating!
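
Voyant does this in the browser; for anyone who prefers a script, a rough equivalent in Python using the wordcloud package might look like the sketch below, where thesis.txt is just a placeholder for whatever plain-text source you feed it.

    # Rough script-based stand-in for Voyant's word cloud (wordcloud package);
    # "thesis.txt" is a placeholder file name, and 55 matches the example above.
    from wordcloud import WordCloud
    import matplotlib.pyplot as plt

    text = open("thesis.txt", encoding="utf-8").read()
    cloud = WordCloud(max_words=55, background_color="white").generate(text)

    plt.imshow(cloud, interpolation="bilinear")
    plt.axis("off")
    plt.show()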



Thanks for stopping by! I hope you enjoyed your visit.

-The Migrant Isotopist 


Articles I included in case you want to check them out for yourself:

Website links to the tools I used, if you want to play around with them (warning: they can be addicting):

Tuesday, September 10, 2019

What is Digital History? A response.


Hello all! 
Welcome back! This week I am responding to a variety of questions surrounding digital history, digital humanities, and basically how to move digital history forward. 

What is digital history?
Digital history uses technology and the internet to disseminate historical information: historical documents, images, and other materials represented in a digital space. Seefeldt and Thomas (2009) discuss how the concept of digital history broadly encompasses the examination and representation of the past through the new technologies of the computer, the internet, and software. It seems that digital history can be defined loosely as anything historical that has in some way been rendered in digital form, and that can be as simple as scanning a document and uploading it to a computer.

How does 21st century digital history theory/practice differ from earlier applications of computer technology to historical research, such as the data-driven quantitative history (cliometrics) of the 1970s?
Cliometrics is the quantitative analysis of history. Thomas (2004) explained that in the past (mostly the 1970s), some historians jumped on the quantitative bandwagon; however, they mostly looked only at their datasets to explain events, rather than including external information or additional historical evidence. Today, quantitative methods are used as a supplement to other historical evidence and as a support tool to defend arguments or explain hypotheses.

How does digital history differ from digital humanities?
The digital humanities and digital history are similar but not the same. Robertson (2016) uses the analogies of a tent and a room in a house to discuss the differences. Digital humanities, he argues, has become a sort of catch-all under what he calls a big tent: digital humanists often work in collaboration with one another, and jobs, conferences, and coursework all expect some degree of proficiency in digital technology. Instead of one catch-all digital technology tent, Robertson believes the field should be more like a house, with rooms specializing in certain areas. Enter digital history. Historians are increasingly utilizing the various technological tools available to explore historical data and events. Tools like mapping (e.g. GIS), text analysis, image analysis, and 3D modeling, along with the ever-growing importance of social media and the internet, have been quite useful for historians and history projects. However, even with an assortment of tools at their fingertips, historians still do not fully utilize what is available, which is curious considering how much they stand to gain from going digital. Ayers (1999) believed that history was especially well suited to the digital world because it could connect to larger and more diverse audiences. From a research standpoint, having more eyes on a topic, from different perspectives and lenses, allows a better understanding of an event or period in time.

What are the promises/perils of doing digital history?
Cohen and Rosenzweig (2006) argued that the main advantage is that we can do more: people have increased access to history, and more information can be stored. However, Ayers (1999) suggested that digital archives create both opportunities and problems. I think what he was trying to point out is that even though it is useful to have information stored in an easily accessible database, it then becomes even easier to miss connections. The past is complicated, complex, and filled with layers, and he believed it would be more difficult to see those nuances with so much information to rifle through. Cohen and Rosenzweig (2006) also raised problems surrounding the authenticity and quality of information, illustrating how easily forgeries circulate online. They provided an image that had been forged well before the introduction of the World Wide Web; even then it was easy to create a forgery, but with the internet, forgeries and false information can be found easily alongside legitimate sources. They used a Google search as an example of looking for information and finding both real and false results. Although they make an excellent point that false information is widely distributed and difficult to police, I think one way this issue can be combated is through education: teaching students how to search for information and how to recognize credible sources can help curb the spread.

Can we make digital history, as a field, more inclusive?
I believe, yes, it can be more inclusive. However, as Leon (2017) and Brennan (2019) suggest, historians need to include and illuminate the work of women and people of color. Beyond that, we need to work to understand the role of underrepresented populations in history, rather than just rehashing the privileged white male point of view. History is the interwoven workings of all people, and all people have a story. That intertwining is what needs to be unraveled in order to understand a topic more holistically, and it needs to be studied through different lenses, which only diversity can provide.

If anyone would like to have a discussion about anything I covered here, please send me feedback and I'll be happy to have an intelligent debate. 😊

Thanks for stopping by! I hope you enjoyed your visit.

-The Migrant Isotopist

Tuesday, September 3, 2019

The Digital Humanities: My Reflection


Hello reader(s)!

This is my first post on my blog (sorry in advance! 😬). This entry is a reflection on the book The Digital Humanities: A Primer for Students and Scholars by Eileen Gardiner and Ronald G. Musto.

The authors discuss the evolution of written text. The first “humanists” focused on reviving the grammar and rhetoric of the “pure” classical era rather than its “corrupt” years during what we now call the medieval period. Reviewing ancient Greek and Latin texts, these humanists deciphered, edited, and disseminated printed works throughout Europe. It is in medieval libraries that we begin to see the beginnings of the “modern” library. Libraries of the 20th century eventually moved away from card catalogs and microfiche and into the digital age. The materials that have been digitized include objects, artifacts, images, sounds, and spaces, and deciding which materials were initially important enough to digitize fell to librarians and archivists.

This leads me to the main question I had throughout the book: who had, and has, access to materials? Many of these digital goods sit behind paywalls. True, there is (now) plenty of open-access digital media on the internet, but in the early digital days, a lot of information was available only to those who could pay for it. Let's take this accessibility issue one step further. Since the medieval period, texts have been translated into and from Greek and Latin, meaning that only people who could read Latin, Greek, or whatever local language the materials were translated into had access. Today, a lot of information is translated into English; however, not everyone can read English, or the materials in their original language.

Let me get side-tracked to put this in relatable terms. We all know what Wikipedia is; we can't Google something without seeing a Wikipedia result pop up. But who has access to the information on Wikipedia? In one of my master's courses, we learned a lot about Wikipedia, and one thing I learned is that not everyone has the same access to information. When I looked at a Wikipedia page, I would often check which languages the page had been translated into. Sometimes very little was available to readers in other languages; other times, very little was available to English speakers (take the Ba'ja page and the Normandy page as contrasting examples of how much information is available to different audiences).

Getting back to the book, let me talk for a moment about the benefits of access. The authors state that museums, which initially put their digitized collections behind paywalls, have found that providing access to digital information online has helped broaden the study of objects, artifacts, images, sounds, and spaces. As technology continues to improve, digital reproductions have come close to true representations of the originals, allowing individuals who cannot physically go see an item in person a chance to study it online. The ability to manipulate images also lets people examine materials in ways they might not be allowed to by visiting the museum (as you typically cannot touch the artifacts 🤷).

Moving on, another thing I thought about during my reading was the cost of digitizing and storing materials. It costs time and money to scan texts, images, and so on, and it costs money to maintain the websites and servers that hold these collections. Going forward, how will people keep up with the advancement of technology and these storage demands?

The Digital Humanities: A Primer for Students and Scholars was an interesting and thought-provoking read, as you can see from my rant above. The book covered much more than access and storage, but that is what I chose to focus on for this first post.

I hope you have enjoyed your visit!

-The Migrant Isotopist