AI Engineers of Melbourne: AI Hackathon

In August 2019 the AI Engineers of Melbourne meetup group kicked off a long-form hackathon, running until the presentation evening on 10 October 2019.

The hackathon was promoted as:

A long-form hackathon with networking events, off-site hacking and online mentoring! Use AI APIs and Libraries to create something useful and deployable.

The event was sponsored by Seek; space for the meet-and-greet night, mid-hackathon catch-up and the presentation evening was provided by Outcome.life; and special thanks go to AI Engineers of Melbourne, and especially Slava Razbash, for organising the hackathon and associated events.

# My Involvement

My involvement in the AI Hackathon was peripheral. As one of the organising mentors at Code Mentoring Melbourne, I helped offer coaching and mentoring to hackathon participants; I also promoted the hackathon through my LinkedIn network.

I was only able to attend the final presentation evening; I was not involved in any of the teams, nor did I present an entry to the hackathon.

# Hackathon Structure

The hackathon began with a formal meet-and-greet evening on 8 August 2019. This was followed by a catch-up evening on 19 September and the final presentation on 10 October.

Throughout the hackathon a Slack group was available for group communications, expert mentorship and any queries relating to the hackathon. The Code Mentoring Melbourne meetup was also available to hackathon entrants every Saturday.

As the event sponsor, Seek also provided expert mentors and the judging panel. The winners received a $500 prize plus the opportunity to present at the AI Upskill Conference in 2020.

# The Entrants

Throughout the hackathon a large number of people expressed interest, requested datasets and proposed ideas. Interest came from Melbourne and further afield, including Brisbane, Sydney and even South Korea. A diverse range of ideas was discussed during the hackathon, including language interpretation, blindness detection, animal identification and more.

# Presentations

Despite the amount of interest and the range of ideas, by the presentation evening only two teams felt they had a concept that was ready to be presented. This meant that the teams could present their concepts in greater depth and answer many questions, ultimately allowing the judges and attendees to gain a deeper understanding of each problem definition and proposed solution.

# Presentation: Emotion CopyCat

Team Emotion CopyCat

The Emotion CopyCat team was composed of 7 people from a variety of backgrounds, including programming, visualisation, psychology and even robotics.

The team set out to help children with autism spectrum disorder (ASD) regulate their emotions; a problem observed by a number of people in the team through professional and personal relationships. The presentation began with a high-level overview of ASD and how it can manifest in different individuals. The highlighted aspects of ASD were a lack of social awareness, speech difficulties and repetitive behaviour; the team chose to concentrate on social awareness in their product.

The technical solution created by the team had several proof-of-concept implementations, and many more ideas on a backlog for future development. The solution used Google Firebase for data storage, a tagged dataset from AffectNet, and code developed in both Python and JavaScript.

The primary function displayed an image of a face and asked the user to select the appropriate emotion. If the user made an incorrect selection, a hint was provided through the use of emojis; if a second incorrect selection was made, a further hint was provided through additional audio, video or text.
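The escalating-hint flow described above can be sketched in a few lines of Python (the team's stated language). This is a minimal illustration only; the emotion labels, hint content and function names are assumptions, not the team's actual implementation:

```python
# Sketch of the escalating-hint quiz flow: a first wrong answer shows an
# emoji hint, a second wrong answer adds a richer hint (audio/video/text).
# Labels and hint content are illustrative assumptions.

EMOJI_HINTS = {"happy": "😀", "sad": "😢", "angry": "😠"}
EXTRA_HINTS = {
    "happy": "Listen: a laughing sound clip",
    "sad": "Watch: a short crying video",
    "angry": "Read: 'brows pulled down, lips pressed together'",
}

def run_round(correct_emotion, answers):
    """Return (solved, hints_shown) for a sequence of user answers."""
    hints = []
    for attempt, answer in enumerate(answers):
        if answer == correct_emotion:
            return True, hints
        if attempt == 0:
            hints.append(EMOJI_HINTS[correct_emotion])   # first hint: emoji
        else:
            hints.append(EXTRA_HINTS[correct_emotion])   # second hint: richer media
    return False, hints
```

A correct first guess returns immediately with no hints; each wrong guess escalates the help offered before the next attempt.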

A secondary function was implemented to help users to mimic an expression. The expression is displayed as a static image of a person and the device camera is used to monitor the user until they mimic the expression for a configurable amount of time.
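The "hold the expression" check above could be approximated as follows. This is a sketch under stated assumptions: the camera and per-frame expression classifier are stubbed out as a simple sequence of predicted labels, and the duration is expressed in consecutive frames rather than seconds:

```python
# Sketch of the expression-mimic check: the user passes once a (hypothetical)
# per-frame classifier reports the target expression for a configurable
# number of consecutive frames. Camera capture and the classifier are stubbed.

def held_expression(frame_predictions, target, required_frames):
    """Return True if `target` appears in `required_frames` consecutive frames."""
    streak = 0
    for predicted in frame_predictions:   # one classifier prediction per frame
        streak = streak + 1 if predicted == target else 0
        if streak >= required_frames:
            return True
    return False
```

In a real implementation the frame stream would come from the device camera and the required duration would be a configurable time window, as described above.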

The tertiary function was created to assist users to understand and enact the appropriate level of eye contact. This would be done initially through the use of static images, but over time would be expanded to having interactive sessions to enable real-time learning.

After the presentation the team was asked a number of questions. These related to the accuracy of existing datasets (something that could be improved with more manual verification), evidence of the concept working (which was backed up by published articles and studies), technical implementation, and what's next for the team.

The team is currently validating their concept with industry professionals and is looking for philanthropic investment to progress the concept to production.

# Presentation: Hirend

Team Hirend

The Hirend team comprised three people, all from a technical background.

The team identified three problems: the little time recruiters spend reviewing each resume; the drop-off rate of applicants faced with long or repetitive application processes; and the outdated applicant tracking system (ATS) software used by many recruitment organisations.

The Hirend team worked to resolve these issues through the use of a chat bot. The concept has applicants upload their resume, which is then scanned and interpreted. Based on the interpreted data and the job description provided by the recruiter, a set of questions is asked to validate the resume and to gain a deeper understanding of the applicant. This information is then presented as a dashboard for the recruiter to review before talking to the applicant.
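The core of this flow can be sketched as a few small functions: scan a resume against the job's requirements, generate follow-up questions for anything the resume does not cover, and summarise the result for the recruiter. All names here are illustrative assumptions, and the keyword matching stands in for the team's resume interpretation:

```python
# Sketch of a Hirend-style flow: keyword-scan a resume against job
# requirements, ask follow-up questions about gaps, and build a report.
# Function names and the simple keyword matching are assumptions.

def scan_resume(resume_text, requirements):
    """Return which job requirements appear in the resume (keyword match)."""
    text = resume_text.lower()
    return {req: req.lower() in text for req in requirements}

def follow_up_questions(coverage):
    """Generate chat-bot questions for requirements the resume did not cover."""
    return [f"Can you tell us about your experience with {req}?"
            for req, found in coverage.items() if not found]

def recruiter_report(candidate, coverage, chat_answers):
    """Summarise resume coverage plus chat answers for the recruiter dashboard."""
    return {
        "candidate": candidate,
        "covered": [req for req, found in coverage.items() if found],
        "chat_answers": chat_answers,
    }
```

In the presented system the interpretation step was far richer than a keyword scan, but the overall shape (scan, question, report) matches the flow described above.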

A chat bot was chosen as the preferred medium as it reduces the formality of a questionnaire and can reduce the bias inherent in human nature.

By offering the solution as software-as-a-service, learnings from a greater number of recruiters and applicants can be applied to the system; companies do not need to constantly upgrade their software as the improvements can be applied globally; and the system can be integrated with existing web interfaces and applications.

At the time of presentation, a number of aspects of the system were hard-coded; in a production environment these would need to be fully implemented. The system is designed for a recruiter or hiring manager to upload a job advertisement, which is interpreted to populate the job specification; additional questions are then asked to ensure the job is fully specified. When an applicant applies for a role, their submitted resume triggers the chat bot's interactions. The resume is interpreted to answer questions relevant to the role, and the chat bot then asks a number of questions to verify data in the resume and to cover requirements the resume did not address. This data is formulated into a report to help the recruiter locate the best applicants for a role.

After the presentation most of the questions were related to the implementation and use of the tool. These included queries about question definition (at the moment the questions are limited, but ultimately the recruiter would be able to define custom questions); handling of poorly formatted resumes (covered by keyword scanning and limited sentence interpretation); the assessment of soft skills (which would need further investigation); the chat medium; and the potential social impacts of the tool.

The team was asked about the next steps for the software; although the proof of concept was working, the team did not seem to think it could be commercialised in the near future without external support.

# Presentation Observations

During the presentations I observed the importance of both public speaking ability and presentation quality. Although both teams believed in their product, some speakers were significantly better at conveying this than others, and the polish of each presentation shaped the audience's interest from the outset. For me, this highlighted the importance of knowing how to present and having confidence in your own abilities. Having said that, it was great to see both teams giving it their best shot and growing in confidence, particularly during the Q&A sessions.

I’d also like to commend the judges for looking beyond presentation polish and public speaking confidence to assess the functionality, social benefits and commercial potential of each concept.

# So, Who Won?

Writing this article without stating who won would be quite pointless. I completely agree with the judges’ decision and believe the best concept won the hackathon.

I’d like to take this opportunity to congratulate Emotion CopyCat on their win. I think the social benefits of the concept are outstanding, and its potential for future improvement and impact means it could become a very valuable tool.
