The field of neuroergonomics has long aimed to use data from brain scans and physiological tests to understand and improve human lives. Past work includes classifying human emotions, using mental models to predict human stress, and improving brain function via electrical stimulation. Recently, as in many similar fields, the focus has shifted toward heavy use of machine learning techniques and algorithms. Although past work has leaned toward CNNs and SVMs, the field is relatively new and niche, and there are no established conventions or standard algorithms. Therefore, we plan to explore many different approaches in this project, using a variety of datasets.
We are building a machine learning model capable of identifying and highlighting road cracks. It takes aerial images of roads, such as highways, as input and outputs a segmentation mask of all detected cracks, for the purpose of aiding infrastructure monitoring and repair. To complete this task, we are developing both an image classification model and an image segmentation model.
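To illustrate the kind of input-to-mask mapping described above, here is a minimal classical baseline, not the team's learned model: cracks in pavement imagery tend to be darker than the surrounding road surface, so a simple intensity threshold on a normalized grayscale image already yields a crude segmentation mask. The function name, threshold value, and toy image are all invented for this sketch.

```python
import numpy as np

def crack_mask(image: np.ndarray, threshold: float = 0.35) -> np.ndarray:
    """Return a binary mask where candidate crack pixels are 1.

    A crude non-learned baseline: normalize the grayscale image to
    [0, 1], then mark every sufficiently dark pixel as a crack.
    """
    gray = image.astype(float)
    span = gray.max() - gray.min()
    gray = (gray - gray.min()) / (span + 1e-8)  # normalize to [0, 1]
    return (gray < threshold).astype(np.uint8)

# Toy 4x4 "road" patch: a dark diagonal crack on a bright surface.
patch = np.array([
    [0.9, 0.9, 0.9, 0.1],
    [0.9, 0.9, 0.1, 0.9],
    [0.9, 0.1, 0.9, 0.9],
    [0.1, 0.9, 0.9, 0.9],
])
mask = crack_mask(patch)  # 1s along the dark anti-diagonal
```

A learned segmentation model would replace the threshold with a network trained on labeled crack masks, but the input/output contract (image in, per-pixel mask out) is the same.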
The program our group is building will take high-resolution electron microscopy images and calculate and display multiple physical characteristics, such as intensity and strain data. The program will have a GUI compatible with Linux systems and will let the user choose which actions to take. New features include a new library function for the diffraction pattern, heatmap scaling, and general fixes.
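As a hedged sketch of what heatmap scaling might involve (the actual library function is not shown in this description, so the name and percentile defaults below are assumptions): raw detector intensities often contain a few extreme pixels, and percentile clipping keeps them from washing out the rest of the colormap.

```python
import numpy as np

def scale_heatmap(data: np.ndarray, lo_pct: float = 2.0, hi_pct: float = 98.0) -> np.ndarray:
    """Rescale raw intensity values to [0, 1] for heatmap display.

    Percentile clipping prevents a handful of hot pixels (common on
    electron microscopy detectors) from dominating the color range.
    """
    lo, hi = np.percentile(data, [lo_pct, hi_pct])
    return np.clip((data - lo) / (hi - lo + 1e-12), 0.0, 1.0)

intensities = np.linspace(0.0, 100.0, 101)  # synthetic intensity ramp
scaled = scale_heatmap(intensities)
```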
To help the libraries classify documents, Professor Lowe has tasked us with creating a machine learning algorithm that classifies research articles, based on their abstracts, as either basic or applied research, with the intention of later expanding the project to further classification of research by department, subject matter, and beyond. To address this problem, the project team has decided to use a supervised learning approach, training a sentiment analysis algorithm to determine how to classify each document.
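A minimal sketch of the supervised bag-of-words approach, using a from-scratch multinomial Naive Bayes classifier rather than the team's actual algorithm; the toy "abstracts" and labels below are invented for illustration, while real training data would come from the library's labeled corpus.

```python
import math
from collections import Counter

def tokenize(text: str) -> list:
    return text.lower().split()

class NaiveBayes:
    """Multinomial Naive Bayes over bag-of-words counts."""

    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        n = len(labels)
        self.priors = {c: math.log(labels.count(c) / n) for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}
        for doc, lab in zip(docs, labels):
            self.counts[lab].update(tokenize(doc))
        self.vocab = {w for c in self.classes for w in self.counts[c]}
        return self

    def predict(self, doc):
        scores = {}
        for c in self.classes:
            total = sum(self.counts[c].values())
            score = self.priors[c]
            for w in tokenize(doc):
                # Laplace smoothing so unseen words do not zero out a class.
                score += math.log((self.counts[c][w] + 1) / (total + len(self.vocab)))
            scores[c] = score
        return max(scores, key=scores.get)

# Invented toy training abstracts (hypothetical labels, for illustration only).
train = [
    ("we investigate the fundamental structure of quantum fields", "basic"),
    ("a theoretical study of prime number distribution", "basic"),
    ("we develop a device to improve crop irrigation efficiency", "applied"),
    ("an engineering method for faster battery charging in vehicles", "applied"),
]
clf = NaiveBayes().fit([d for d, _ in train], [l for _, l in train])
label = clf.predict("a theoretical study of quantum structure")
```

A production version would likely swap in an off-the-shelf pipeline (e.g. TF-IDF features plus a linear classifier), but the supervised structure — labeled abstracts in, a basic/applied decision out — is the same.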
Science-fiction books often mention and elaborate on speculative technologies years before their realization. Because of the sheer volume of literature that would need to be searched for information about speculative technology, going through it all without the help of machine learning would be impractical.
Flooding is the most common natural disaster in the US and kills more people each year than tornadoes, hurricanes, or lightning. One way to better manage this increasing flood risk is to improve situational awareness, so that decision makers and residents of affected areas can stay informed about new developments.
This page is dedicated to the TIDAL team at Texas A&M.
The tidaltamu team focuses on bridging the gap between undergraduate students and graduate-level research and industry projects. We hope to get in contact with many professors and industry professionals so that we can offer a wide range of projects to our members. Our goal is to give undergraduate students experience outside the classroom, preparing them well for internships and professional jobs in fields such as materials science, aerospace engineering, mechanical engineering, and more to come.