Hopefully, this is my last thought for this week’s post, so I don’t have to revisit and modify it like I did with my post for last week.
I had to look Matthew Wilkens up, and it turned out that he specializes in contemporary American fiction and in digital and computational literary studies, which made a lot of sense with regard to his essay. I know for a fact that one of my old professors would totally disagree with Wilkens, to the point of rejecting the argument of his essay and perhaps its overall content, but not me. I have always viewed literary analysis and cultural analysis as inseparable fields of study that should always interact with and respond to each other. In his essay, Wilkens touches on this issue in a very interesting way, one that relates to the discourse of digital humanities and at the same time to some of our previous readings.
Wilkens starts off his essay with the everlasting problem of the canon. In my understanding, the canon has always been a problematic issue that resists solution. Wilkens admits this and explains that the canon, in its broadest sense, will always exist. From the canon, Wilkens moves to reading and asks: what do we really read, and how does reading add to our knowledge of the past and of the present around us? We are clearly in an age of abundance; it has been this way for decades and will most likely continue to be in the future. Wilkens believes that this abundance has left us unable “to keep up with [the] contemporary literary production” (251). The question therefore changes from what to read to what “to ignore.” Wilkens argues that what we read is “nonrepresentative of the full field of literary and cultural production” (251), and so he proposes that we reconsider our position on the notion of close reading.
Wilkens argues that “[w]e need to do less close reading and more of anything and everything else that might help us extract information from and about texts as indicators of larger cultural issues” (251). With every day that goes by, I see the urge and the need to adopt Wilkens’s view. To support his argument, Wilkens draws on an example from history made possible by digitization. In his discussion of “Text Extraction and Mapping,” I think Wilkens is able to show that by spending less time close reading texts and focusing more on their general context, we can understand history and culture better. Given the specific example he uses, I think we ought to reconsider our previous understanding of American regionalism in the 19th century.
Wilkens’s example relates to our previous readings in a very interesting way. Two weeks ago, Mussell argued that by digitizing the humanities we will be able to understand the past better. Here, I think that is evident in Wilkens’s example: through the digitization of 19th century American texts, software can gather place data from texts published in a single year and produce statistical figures that change our view of the past, specifically our understanding of American regionalism.
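Just to make the mechanics concrete for myself, here is a minimal sketch of the kind of place-name extraction and counting I imagine behind such a figure. This is only my own illustration, not Wilkens’s actual pipeline; it assumes a Python environment with spaCy and its small English model installed, and the corpus variable in the usage comment is hypothetical.

# Illustrative sketch only: tally place-name mentions across digitized texts.
# Assumes spaCy is installed and the "en_core_web_sm" model has been downloaded.
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")

def count_places(texts):
    """Count place-name mentions (GPE/LOC entities) across a list of plain-text documents."""
    counts = Counter()
    for text in texts:
        doc = nlp(text)
        for ent in doc.ents:
            if ent.label_ in ("GPE", "LOC"):
                counts[ent.text] += 1
    return counts

# Hypothetical usage with a list of digitized novels from a single year:
# places = count_places(corpus_1851)
# print(places.most_common(20))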
(I don’t know, but I feel that the further we move through our course, the more neatly the loose ends tie themselves together.)
Since I will be presenting on the websites portion of our class tonight, I feel there is no need to discuss them here. But I will post the handout, with discussion questions, that I intend to give out tonight.