HRUMC 2011 Student Abstracts

 

Nineteen students and five faculty attend the 18th annual Hudson River Undergraduate Math Conference

The Hudson River Undergraduate Mathematics Conference (HRUMC) is a one-day mathematics conference held each spring at rotating institutions, and attended by students and faculty from universities, colleges, and community colleges across New York and New England. The conference features short presentations by students and faculty, along with a longer invited address by a noted mathematician.

Nineteen students and five faculty attended the 18th annual Hudson River Undergraduate Math Conference at Skidmore College in Saratoga Springs, NY. 

Our contingent was the largest from any of the participating colleges this year, sending four more students than the closest competitor. Below are the titles and abstracts for the SLU talks.

Lauren Brozowski  An Analysis of Penalty Biases called in the NHL in the Regular Season 2009-2010

Penalties in ice hockey are an important aspect of the game, as the consequence of a penalty being drawn can lead to a goal and ultimately influence which team wins the game. In this paper, we analyze all the penalties taken during the National Hockey League’s 2009-2010 regular season. As part of our analysis we look at the rate at which penalties were called by each of the league’s referees and linesmen. A few factors we include are the experience a referee has in calling specific penalties as well as the tendencies among the types of penalties for each official. The results of our analysis might be useful to NHL teams in guiding their style of play, knowing that certain officials will be on the ice for a given game.
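
To illustrate the kind of rate comparison described above, here is a minimal Python (pandas) sketch; the penalty log, column names, and officials are invented for illustration and are not from the study:

```python
import pandas as pd

# Hypothetical penalty log; columns and values are illustrative only.
penalties = pd.DataFrame({
    "official":     ["Smith", "Smith", "Jones", "Jones", "Jones", "Lee"],
    "penalty_type": ["hooking", "tripping", "hooking", "hooking", "slashing", "tripping"],
    "games_worked": [40, 40, 55, 55, 55, 30],
})

# Penalties called per game worked, broken down by official and penalty type.
rates = (penalties
         .groupby(["official", "penalty_type"])
         .agg(calls=("penalty_type", "size"),
              games=("games_worked", "first")))
rates["calls_per_game"] = rates["calls"] / rates["games"]
print(rates)
```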



James Curro  Why Tournament Rankings Never get it Right

My project looks at round robin tournaments and at the best way to rank the teams that play in them. I researched a multitude of different ways to rank round robin tournaments and go through the pros and cons of each. In Graph Theory we were taught the method considered best for ranking teams after a round robin season. I took this knowledge and applied it to different NCAA football and basketball rankings to see if it matched my findings. Given that my method is considered the fairest by mathematicians, I found it very surprising how different my results were from the NCAA’s. I then used simulation: I gave each team a preseason ranking and a probability of beating each team ranked below it, ran the tournament many times, and checked whether my preseason rankings matched the rankings produced by the simulation. Analyzing the simulation results turned up some findings that were quite surprising.
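
As a rough illustration of the simulation idea, here is a minimal Python sketch of a round robin in which higher pre-ranked teams beat lower-ranked ones with a fixed probability; the team count and upset probability are assumptions, not the values used in the talk:

```python
import random

def simulate_round_robin(n_teams=8, p_upset=0.25, seed=1):
    """Simulate one round robin; team i is pre-ranked above team j when i < j.
    The better pre-ranked team wins with probability 1 - p_upset (an assumed
    constant; the talk used team-specific probabilities)."""
    random.seed(seed)
    wins = [0] * n_teams
    for i in range(n_teams):
        for j in range(i + 1, n_teams):
            if random.random() < 1 - p_upset:
                wins[i] += 1          # favorite (higher pre-rank) wins
            else:
                wins[j] += 1          # upset
    # Rank teams by win count; compare against the preseason order 0, 1, 2, ...
    return sorted(range(n_teams), key=lambda t: -wins[t])

print(simulate_round_robin())
```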



Nancy Decker  How Cliques make you Sick

Getting a cold this year may not be determined by a virus but by a person's friends. The same disease infecting different social networks can spread, and behave, in drastically different ways. Using computer simulations, we have compared different social networks. By following the same disease over different types of social networks, we can observe how the structure of the network affects how many people get the disease, whether the disease dies out in the population, and what pattern the disease follows.
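
To make the simulation idea concrete, here is a toy Python sketch of a susceptible-infected-recovered (SIR) process run on two small networks, a clique and a chain; the transmission and recovery probabilities are invented, and the study's actual models may differ:

```python
import random

def simulate_sir(adjacency, p_transmit=0.3, p_recover=0.2, patient_zero=0, seed=0):
    """Toy SIR epidemic on a social network given as an adjacency list.
    Parameter values are illustrative, not from the study."""
    random.seed(seed)
    state = {v: "S" for v in adjacency}
    state[patient_zero] = "I"
    total_infected = 1
    while any(s == "I" for s in state.values()):
        new_state = dict(state)
        for v, s in state.items():
            if s != "I":
                continue
            for nbr in adjacency[v]:            # try to infect susceptible friends
                if state[nbr] == "S" and random.random() < p_transmit:
                    if new_state[nbr] == "S":
                        total_infected += 1
                    new_state[nbr] = "I"
            if random.random() < p_recover:     # infected node may recover
                new_state[v] = "R"
        state = new_state
    return total_infected

# A tight clique versus a sparse chain on the same five people.
clique = {0: [1, 2, 3, 4], 1: [0, 2, 3, 4], 2: [0, 1, 3, 4],
          3: [0, 1, 2, 4], 4: [0, 1, 2, 3]}
chain = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(simulate_sir(clique), simulate_sir(chain))
```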

Matt Dodge and Robert Romeo  Rock You Like a…Statistician

Guitar Hero is a popular video game where rock enthusiasts can act as Slash, Hendrix or Clapton playing their favorite songs with a guitar-shaped controller. Players attempt to hit sequences of notes at specific times as dictated by the game. If the player hits a wrong note, plays at the incorrect time, or misses the note altogether, the note doesn’t count. As more notes are missed, the in-game spectators respond unfavorably, and the player risks getting booed off the stage before the end of the song. We want to determine if missed notes occur randomly or are grouped together in difficult parts of the song. Thus, we developed methods to address this research question. We then obtained data by allowing undergraduate students and professors to try their hand at becoming a rock legend. By applying our methods to these datasets, and performing simulations to compare how well our different methods perform under a variety of situations, we will be able to evaluate their effectiveness and determine if missed notes follow a pattern.
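
One plausible way to formalize the "random or clustered?" question is the Wald-Wolfowitz runs test; we do not know which methods the speakers actually developed, so the following Python sketch, with a made-up play-through, is only illustrative:

```python
import math

def runs_test(sequence):
    """Wald-Wolfowitz runs test on a binary hit/miss sequence.
    A large negative z suggests the misses are clustered rather than random."""
    n1 = sum(sequence)                  # hits
    n0 = len(sequence) - n1             # misses
    n = n0 + n1
    runs = 1 + sum(a != b for a, b in zip(sequence, sequence[1:]))
    expected = 2 * n0 * n1 / n + 1
    variance = 2 * n0 * n1 * (2 * n0 * n1 - n) / (n ** 2 * (n - 1))
    z = (runs - expected) / math.sqrt(variance)
    return runs, expected, z

# 1 = note hit, 0 = note missed (an invented play-through, not real data).
notes = [1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0]
print(runs_test(notes))
```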

Sachi Hashimoto  Factoring Zero

In high school algebra, we learned the zero product property, which states that if ab=0, then a=0 or b=0. In this talk we look at rings where the zero product property does not always hold, meaning that we may factor zero into a product of two nonzero ring elements. Given such a ring, we can define the zero divisor graph of that ring to be a graph with the zero divisors as vertices, where two vertices a and b share an edge if and only if ab=0. The study of zero divisor graphs is a relatively recent topic of interest. In this talk we will look at various classes of rings and their zero divisor graphs, and examine certain graphs to see whether they can be realized as the zero divisor graph of some ring.
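
For example, in Z/6Z we have 2·3 = 0 and 3·4 = 0, so the zero divisor graph is the path 2 – 3 – 4. Here is a short Python sketch that builds the zero divisor graph of Z/nZ directly from the definition:

```python
def zero_divisor_graph(n):
    """Zero divisor graph of the ring Z/nZ: vertices are the nonzero
    zero divisors, with an edge {a, b} whenever a*b = 0 (mod n)."""
    divisors = [a for a in range(1, n)
                if any(a * b % n == 0 for b in range(1, n))]
    edges = {frozenset((a, b)) for a in divisors for b in divisors
             if a != b and a * b % n == 0}
    return divisors, sorted(tuple(sorted(e)) for e in edges)

print(zero_divisor_graph(6))   # ([2, 3, 4], [(2, 3), (3, 4)])
```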

Anne Lawless  Analyzing Exotic Amazonian Bird Foraging

“God gives every bird its food, but he doesn’t throw it in their nest” – J. G. Holland. We have information on different species of ant-following Amazonian birds and their competitive eating habits. As the data contain small counts, typical methods such as ANOVA are not appropriate. The eating habits of these birds can instead be modeled using Poisson regression. Further, a new multiple comparison technique extending the concept of Tukey’s HSD to Poisson regression has been developed to discover significant differences in their mean foraging success rates.
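
Here is a minimal Python sketch of Poisson regression with species indicators, using statsmodels and simulated counts in place of the actual foraging data:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical foraging counts for three bird species (not the study's data).
rng = np.random.default_rng(42)
species = np.repeat([0, 1, 2], 30)              # species codes
true_means = np.array([2.0, 3.5, 1.2])          # assumed mean success counts
counts = rng.poisson(true_means[species])

# Poisson regression of success counts on species indicator variables.
X = sm.add_constant(np.column_stack([(species == 1).astype(int),
                                     (species == 2).astype(int)]))
model = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(model.summary())
```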

Daniel M. Look (Faculty)  Statistics in the Hyborian Age: An Introduction to Stylometry via Conan the Barbarian

Stylometry is the study and quantification of writing style, often using statistical methods. Stylometric techniques can be used in conjunction with more conventional means to determine the authorship of a contested work. Most notably, this was used to help solidify the authorship attribution for the anonymous Federalist Papers. We will demonstrate these techniques using stories featuring Conan the Barbarian as penned by Robert E. Howard, L. Sprague de Camp, and Lin Carter.
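
A toy Python sketch of the basic idea, comparing relative frequencies of a few function words across texts; the word list and sample sentences are invented, not taken from the Conan corpus:

```python
from collections import Counter
import re

# Function-word relative frequencies are a classic stylometric fingerprint
# (the approach famously applied to the Federalist Papers).
FUNCTION_WORDS = ["the", "of", "and", "to", "upon", "while", "whilst"]

def fingerprint(text):
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = len(words)
    return {w: counts[w] / total for w in FUNCTION_WORDS}

sample_a = "The wolves howled while Conan strode to the gates of the city."
sample_b = "Whilst the sorcerer slept, Conan crept to the edge of the camp."
print(fingerprint(sample_a))
print(fingerprint(sample_b))
```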

Nicole Martin  Words of Today Compared to Terminology of Yesterday

Words that were common in the past have often been replaced with new words. For the past couple of years Google has been entering the text of digitized books into its database to measure word usage. In this project we look at the usage of the words “lunch”, “dinner”, and “supper”. These three words refer to meals taken at different times of the day, and because of cultural changes their usage patterns fluctuate over time. We filter these three words, in capitalized, lowercase, and plural forms, from the 470 million lines of Google Labs data into a smaller data set. We then apply smoothers such as splines and loess to investigate patterns in the usage of these words.
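
A minimal Python sketch of loess smoothing applied to a made-up usage series standing in for the Google n-gram counts, via statsmodels' lowess implementation:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# Invented yearly relative frequencies standing in for the n-gram data.
rng = np.random.default_rng(0)
years = np.arange(1900, 2001)
supper = 0.5 * np.exp(-(years - 1900) / 60) + 0.02 * rng.normal(size=years.size)

# Loess smoother: frac controls the width of the local fitting window.
smoothed = lowess(supper, years, frac=0.3)   # columns: year, smoothed frequency
print(smoothed[:5])
```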

Caitlin McArdle  A New Perspective on Likert Data

Likert-style questions are commonly found on questionnaires, especially in the field of psychology. Likert questions typically have five responses: “strongly disagree”, “disagree”, “neither agree nor disagree”, “agree”, and “strongly agree”. For data analysis these responses are then coded as the numbers one through five. Oftentimes, this data is then analyzed in a way that does not correspond with what the responses actually represent. There is much debate over how to properly analyze data produced by Likert questions. This talk will discuss common methods for analyzing Likert data, as well as describe a new method for investigating relationships between respondents and questions.
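
A tiny Python example of why the coding step is contested: on an invented, sharply polarized item, the mean of the 1-5 codes sits at "neutral" and masks the split, which is one reason treating ordinal codes as interval data is debated:

```python
from statistics import mean, median

CODES = {"strongly disagree": 1, "disagree": 2,
         "neither agree nor disagree": 3, "agree": 4, "strongly agree": 5}

# Made-up polarized item: almost everyone feels strongly, few are neutral.
responses = (["strongly disagree"] * 10
             + ["neither agree nor disagree"] * 2
             + ["strongly agree"] * 10)
coded = [CODES[r] for r in responses]

# The mean (3.0) suggests a neutral group, misrepresenting the polarization.
print(mean(coded), median(coded))
```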

Ryan Meyer  Reliability of Missile Systems: An Application of Bayesian Statistics

Reliability is a statistical assessment of how well a system will perform, based on several factors. In my examination of this concept I use a binary response dataset obtained by destructively testing missiles to evaluate their reliability. I focus primarily on Bayesian methods and use them to build a simulation that predicts the overall reliability of the missile population at a desired age. During this analysis I also comment on where frequentist, or classical, statistics differs from the Bayesian approach and why one would prefer one over the other.
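
A minimal Bayesian sketch using a conjugate Beta prior on the success probability; the test counts and prior are assumptions, and the talk's model (which accounts for missile age) is surely richer:

```python
import numpy as np

# Invented destructive-test results: 18 successes, 2 failures.
successes, failures = 18, 2
prior_a, prior_b = 1, 1              # uniform Beta(1, 1) prior (an assumption)

# Conjugacy: posterior for the reliability p is Beta(a + s, b + f).
rng = np.random.default_rng(0)
posterior_draws = rng.beta(prior_a + successes, prior_b + failures, size=100_000)

print("posterior mean reliability:", posterior_draws.mean())
print("95% credible interval:", np.percentile(posterior_draws, [2.5, 97.5]))
```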

Waled Murshed  Introduction to Survival Analysis

Estimating the survival function and making predictions has been a major interest in many statistical fields, including medical research and statistics. A very popular method for estimating the survival function, together with a statistical test for comparing survival distributions, is the product-limit method, also known as the Kaplan-Meier method. Furthermore, a proportional hazards model, more specifically the Cox model, is used for more in-depth analysis. This talk will introduce these and several other aspects of survival analysis, and apply the methods to several data sets such as “Time to First Recurrence of a Tumor in Bladder Cancer Patients”.
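
To show the mechanics of the Kaplan-Meier method, here is the product-limit estimator computed by hand in Python on toy censored data:

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) estimate of the survival function.
    `events` is 1 where the event (e.g. tumor recurrence) was observed
    and 0 where the observation was censored."""
    times, events = np.asarray(times), np.asarray(events)
    survival, s = [], 1.0
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)             # still under observation at t
        died = np.sum((times == t) & (events == 1))
        s *= 1 - died / at_risk                  # product-limit update
        survival.append((t, s))
    return survival

# Toy data: recurrence times in months; 0 marks a censored patient.
print(kaplan_meier([3, 5, 5, 8, 12, 16], [1, 1, 0, 1, 0, 1]))
```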

Jeremy Mwenda  Java, XML, and Web Services for Small-Scale Web Applications

This project explores using Java, XML, and web services to develop small-scale web applications that require storing data as well as validating it. When building an application that involves storing and accessing data, one would normally consider using a database. However, if the application does not involve large amounts of data, a database is not necessary: using one means incurring the extra cost of setting up a database server as well as learning a database access language (such as SQL). To avoid this cost, one can use XML to store the data. Some small-scale applications might also need functions (or methods) that are costly to implement. For instance, an application that validates user data such as addresses needs a function that compares a user-supplied address against an up-to-date database of real addresses; getting access to such real-time data and compiling it into a database could be very costly. This is where web services become useful. A business that offers an address-validation web service, for instance, can cover the cost of creating the service by using the same database of addresses to provide validation functionality to many clients. To use the web service, a client only needs to know how to communicate with it and how to invoke its functionality. In this project, I use XML to store user data and SOAP web services to check the validity of some of that data, such as addresses and email.
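
The project itself is in Java; to keep the examples in this write-up in one language, here is the XML-as-lightweight-data-store idea sketched with Python's standard library (the file name and user data are invented):

```python
import xml.etree.ElementTree as ET

# Build an XML "table" of users in memory.
users = ET.Element("users")
u = ET.SubElement(users, "user", id="1")
ET.SubElement(u, "name").text = "Ada Lovelace"
ET.SubElement(u, "email").text = "ada@example.com"

# Persist to disk -- no database server or SQL required.
ET.ElementTree(users).write("users.xml", encoding="utf-8", xml_declaration=True)

# Read it back and query with a simple XPath expression.
doc = ET.parse("users.xml")
print(doc.findtext("./user[@id='1']/email"))   # ada@example.com
```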

Hau Nguyen  Bayesian vs. Frequentist Approaches to Modeling Seal Populations

Our classical approach in statistics, the Frequentist method, is based on repeated random sampling with fixed parameters to test hypotheses and form confidence intervals. The Bayesian approach to statistics differs in that the parameters are treated as random variables that can be modeled according to some distribution. Although the ways they are conducted may seem contradictory, their applications should be complementary; their usefulness depends on how we want to approach the data and the models. In my research, I illustrate those differences by comparing the results that I obtain from performing a Poisson regression analysis of harbor seal haul-outs in Ireland using both the Frequentist and Bayesian approaches.
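
A minimal sketch of the contrast for a single Poisson rate, using a conjugate Gamma prior; the counts and prior hyperparameters are invented, and the actual analysis is a full Poisson regression:

```python
import numpy as np
from scipy.stats import gamma

# Made-up daily haul-out counts standing in for the seal data.
counts = np.array([12, 9, 15, 11, 8, 14, 10])

# Frequentist: the MLE of a Poisson rate is simply the sample mean.
mle = counts.mean()

# Bayesian: with a Gamma(a, rate=b) prior the posterior is again Gamma.
a, b = 2.0, 0.2                       # assumed prior hyperparameters
post = gamma(a + counts.sum(), scale=1 / (b + counts.size))

print("MLE:", mle)
print("posterior mean:", post.mean())
print("95% credible interval:", post.interval(0.95))
```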

Matthew Raley  Modeling the Dow Jones Industrial Average using Time Series Analysis

Cyclical by nature, the economy of the United States is constantly changing. Stock market indices depict both expansionary and recessionary trends in the economy. I use multiple linear regression and time series analysis, incorporating the statistical bootstrap, to model monthly movements in the Dow Jones Industrial Average (DJIA) based on several economic indicators: West Texas Intermediate (WTI) Crude Oil Spot Prices, Gold Spot Prices, Unemployment Rates, Federal Funds Rates, and Housing Starts.
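
A hedged sketch of the bootstrap step in Python: a plain case-resampling bootstrap of a regression slope on simulated data. (A serious time series analysis would more likely use a block bootstrap to respect serial dependence; this only shows the resampling mechanics.)

```python
import numpy as np

# x and y stand in for an economic indicator and monthly DJIA movements.
rng = np.random.default_rng(0)
x = rng.normal(size=120)
y = 0.8 * x + rng.normal(scale=0.5, size=120)

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

# Resample months with replacement and refit to get a sampling distribution.
boot = np.empty(5000)
for i in range(boot.size):
    idx = rng.integers(0, x.size, x.size)
    boot[i] = slope(x[idx], y[idx])

print("slope:", slope(x, y))
print("bootstrap 95% CI:", np.percentile(boot, [2.5, 97.5]))
```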

Melissa Rogers  How Do I Color Thee? Let Me Count the Ways

The chromatic polynomial was first introduced exactly 99 years ago as a potential tool for proving the four-color theorem and has been studied extensively since then. In this talk I will define and discuss the characteristics of the chromatic polynomials of graphs. I will illustrate the concept on simple graphs, then introduce the contraction-deletion argument that allows one to determine the chromatic polynomial of any graph. Finally, I will investigate two infinite families of graphs that have identical chromatic polynomials and discuss the importance of chromatic uniqueness.
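
A short Python sketch of the contraction-deletion recursion P(G, k) = P(G − e, k) − P(G / e, k), checked on the triangle K3, whose chromatic polynomial is k(k−1)(k−2):

```python
from sympy import symbols, expand

k = symbols("k")

def chromatic(vertices, edges):
    """Chromatic polynomial via contraction-deletion:
    P(G, k) = P(G - e, k) - P(G / e, k)."""
    if not edges:
        return k ** len(vertices)          # n isolated vertices: k^n colorings
    (u, v), rest = edges[0], edges[1:]
    deleted = chromatic(vertices, rest)
    # Contract v into u: redirect v's edges to u, dropping loops and duplicates.
    merged = [tuple(sorted((u if a == v else a, u if b == v else b)))
              for (a, b) in rest]
    merged = list({e for e in merged if e[0] != e[1]})
    contracted = chromatic([w for w in vertices if w != v], merged)
    return expand(deleted - contracted)

# The triangle K3 gives k(k-1)(k-2) = k^3 - 3k^2 + 2k.
print(chromatic([1, 2, 3], [(1, 2), (2, 3), (1, 3)]))
```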

Somphone Sonenarong  Hamilton and the Discovery of Quaternions

In 1843, Sir William Rowan Hamilton inscribed i² = j² = k² = ijk = −1 onto Brougham Bridge in Dublin while on his way to a council meeting of the Royal Irish Academy. The inscription records Hamilton’s discovery of the quaternions, a number system that extends the complex numbers.
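
A quick numerical check of the bridge identities, using the standard quaternion multiplication rule on 4-tuples (w, x, y, z) representing w + xi + yj + zk:

```python
def qmul(p, q):
    """Multiply two quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(qmul(i, i), qmul(j, j), qmul(k, k))   # each is (-1, 0, 0, 0)
print(qmul(qmul(i, j), k))                  # ijk is also (-1, 0, 0, 0)
```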