Fall 2024
I picked project 1 because the emoji part of the research gave an interesting result that disproves part of the hypothesis. Both parts of the research would be much stronger if they were extended to a larger scale.
I am fascinated by the authors' dedication to designing the chatbot; the sheer amount of work they put into it is remarkable. I also firmly believe in the usefulness of this project. It not only has research value but can also be applied in business settings. I believe this is very important in computer science research, since a lot of computer science research can be turned into business applications. Based on my personal experience with Siri, the results of this research are convincing.
I selected this group because their overall research methodology was sound. They had an advantage in that they built on existing research, and even though they were not able to achieve their overarching goal of predicting different diseases from these leads, they followed each research step thoroughly.
Compared to other projects, theirs is more complete.
Although I am not a biology major, the work on the GAN model is simply incredible. As a project manager who manages data, I am fascinated by the way they present their data. From their research experience, I learned a lot about how to clean, present, and select data.
The area of research sounds very beneficial. The oral presentation showed that they were very well prepared, and the purpose of the project was explained very clearly to me during the poster presentation.
Very complete project, with a large dataset pulled together in support. It was good to see critical analysis of even the evaluation step.
Has real-life applications and remarkable results.
I am choosing this project second because I find it interesting. The findings are not what I would expect and also highlight an opportunity for improvement in MLM. From their presentation and poster, it sounds like the authors found an area in MLM that has largely been ignored or underdeveloped, conducted experiments that identified the problem, proposed two solutions, and then determined which solution worked best. They also incorporated multiple languages into their research, which is impressive. While some other projects could likely have a bigger impact on the research community due to their involvement in chronic disease and health, this work, I think, was a better overall project because of its research and proposed solutions. I think this work will end up holding more weight within the research community than the other papers because these three authors actually solved a real problem in MLM despite the time and resource constraints.
I think this is a well-constructed project with interesting results. This research is highly meaningful for what will essentially be a "must do" for all NLP research in the future. Also, the general structure of the research was clear both in the presentation and on the poster.
Really enjoyed listening to both the slides and poster presentation for this project as the relevance of the contributions was very clear to me—current gender bias metrics are not robust, and it's important to correct these biases for masked language modeling as the applications are widespread (e.g., Google search, YouTube recommendations, autocomplete, etc.). It was also interesting that the work was tested across four Indo-European languages (English, Spanish, Portuguese, German), and I would be curious to see how the results turn out for Asian languages such as Mandarin or Korean—they may be significantly different due to cultural variations and the corpus that these masked language models are trained on.
Very complete project, with tons of implicit extensibility that the authors are very aware of and a solid basis for the method being applied.
Was easy to understand and novel
I think this project is good because of its clearly organized experiment and model design. Although they did not get the desired result that would reflect a sigmoid curve, the path for future improvement of this project is also clear. In addition, AI + HPC is a very promising research direction.
I selected this project because I thought their design and methods were very complex and interesting. I think taking risks should be rewarded, and because I felt their topic and research were quite advanced and successful, I think their work was among the highest in quality.
I picked this project because it provides a novel direction for the existing evaluation metrics. It can be used to take speech-to-text to another level!
I believe this group was very strong because they also followed sound research methodology. They were able to create their own algorithms and work on a legitimate NLP task that others had not tackled. Their steps are clear to follow, and they were able to achieve results and move forward in an untouched area.
Their explanation helped me understand their project well.
For this project, I think the model they created was very informative. They compared Amazon with another product, and it generated useful results. The paper is very novel as well.
I could clearly see that they have worked really hard to come this far. Their poster was the best one in the class!
I thought that this project was of high quality because the findings were conveyed in a clear, concise manner. I could easily understand why this topic is important and why their specific contribution of developing a new metric (other than the word error rate) is pivotal, because the big picture was well highlighted through both the presentation slides and the poster. I also thought that the algorithm described was very innovative, and I wonder what the next steps are for this group; one option might be working on multiparty speaker diarization, since their data currently involves only two parties.
The purpose of the project was explained very well to me. I felt this was an interesting project.
I selected this project because creating the TDER evaluation metric, along with implementing the Needleman-Wunsch algorithm and their own 3-D matrix, was very novel. I also believe that it was the closest project to being fully completed, especially since they achieved results and performance above the other metrics. Furthermore, I believe it was also presented very concisely compared to other presentations, which also contributes to its ranking.
I selected this project because the model and algorithm this work used were really cool and different. I give the author special credit for doing as much, if not more, work than everyone else as a one-person team. Both his PowerPoint and poster presentation were of extremely high quality and delivered.
First of all, the author did the whole paper himself, which is very impressive. His model and its assumptions were also impressive. I was fascinated by his results.
I chose this group because the quality of their work is good, given that this is a single-person group.
The authors did great work. Their presentation was clean and clear, and their poster was very well made. Furthermore, their work promises to make major contributions to the field of NLP and language models. It also highlights interesting contrasts between the performance of the different forms of BERT, and I'd love to see their data compared with high school seniors' scores on the GER. Simply put, their project has the credibility of being novel and of greatly aiding the NLP field, while also being interesting.
They have shown the most progress throughout the semester, and they had a clear, concise oral presentation.
I chose this group due to the value of their research. They created a dataset for sentence completion tasks and evaluated each popular pre-trained model on it, not only demonstrating the quality of their dataset but also assessing those models' ability at sentence completion.
I selected this project because I believe that their methodology is very sound and that, with a larger dataset, their methods could produce novel results. I also believe that their work could advance the NLP field (I am no expert, though, so take this with a grain of salt) with a substantial result, because it could hint at the weaknesses of transformer models at performing certain tasks and thus perhaps at what exactly transformer models capture in their predictions. I also believe that during the poster presentations they did the best job of explaining how their project worked.