Swipe iX Innovation Hackathon

June 11, 2020


Digital transformation is not just about developing new products, processes or tools; it’s about redefining what efficiency means for your business. In an era of rapidly intensifying competition, driven by the exponential adoption of cloud solutions and AI, companies need to innovate now more than ever to maintain that critical edge.

To keep up with the ever-shifting digital landscape created by entirely new categories of technology, Swipe iX has made it a core focus to explore these frontiers. To truly embrace this universe of opportunities, we realised as a business that we needed to tap into and unlock the collective creativity of our team.


In April Swipe iX hosted its first quarterly Hackathon event for 2019. With a focus on cloud services, teams were given specific challenges and tasked with solving them using AWS tools and services. Swipe iX is a proud AWS Partner and constantly seeks new ways to innovate with its technology.

The Format

Five teams were selected, each with a team lead, backend developer, frontend developer, product owner and a consultant. Over the course of a single day, teams were required to conceptualise, develop and present a functioning prototype to illustrate their chosen solution. Each step of the process was to be well documented in Confluence, describing the problem, goals and key use case for the solution.

The Challenges

Teams were asked to select one of five challenges, each built around a core cloud technology, and to produce a prototype proving the principal use case of their solution.


Challenge 1: You code, Alexa comes to life. Create new Skills for Alexa, the brain behind Amazon Echo.

Build a new voice-activated Alexa skill that makes life more enjoyable, organised and/or convenient. No hardware is required: you don't need an Amazon Echo to participate, as you can test your skill using the emulator in the developer console.
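
For context, a skeleton custom skill in Node.js (the stack used across most of our solutions) might look like the following sketch; the intent name and spoken response are purely illustrative.

```javascript
// Minimal sketch of a custom Alexa skill, assuming the ask-sdk-core
// package; "HelloIntent" is a hypothetical intent name.
const Alexa = require('ask-sdk-core');

const HelloIntentHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'HelloIntent';
  },
  handle(handlerInput) {
    // Speak a response back through the device (or the console emulator).
    return handlerInput.responseBuilder
      .speak('Hello from your new skill!')
      .getResponse();
  },
};

// The Lambda entry point that Alexa invokes for every request.
exports.handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(HelloIntentHandler)
  .lambda();
```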

Challenge 2: Build intelligent applications with out-of-the-box pre-trained language and computer vision services!

Challenge 3: Build machine learning projects using AWS DeepLens, the world’s first deep-learning-enabled video camera for developers.

  • Create a working software application that uses and runs on the AWS DeepLens device.
  • Use AWS DeepLens’s sample projects to get started or build your own custom model from scratch.

Challenge 4: Easily build scalable bots for Slack using AWS Lambda – all without needing to provision and manage servers.

Build a working bot for Slack that runs on AWS Lambda. (A minimal event-handler sketch follows the requirements below.)

  • Bots must use AWS Lambda and Amazon API Gateway.
  • Integrate Slack APIs, such as the Events API.
  • Showcase natural language processing of chat conversations by using open-source NLP libraries.
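
As a rough illustration of the plumbing involved, a Lambda behind API Gateway handling Slack's Events API might look like this sketch; the event handling shown is deliberately minimal.

```javascript
// Sketch of a Lambda (behind API Gateway) receiving Slack Events API
// callbacks. Slack first sends a one-off url_verification challenge,
// then JSON events such as incoming messages.
exports.handler = async (event) => {
  const body = JSON.parse(event.body);

  // Echo the challenge back so Slack can verify the endpoint.
  if (body.type === 'url_verification') {
    return { statusCode: 200, body: body.challenge };
  }

  // Ignore the bot's own messages to avoid infinite reply loops.
  if (body.event && body.event.type === 'message' && !body.event.bot_id) {
    // Hand the text to an NLP library or reply via chat.postMessage here.
    console.log('Received message:', body.event.text);
  }

  // Slack expects a fast 200 acknowledgement for every event.
  return { statusCode: 200, body: '' };
};
```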

Challenge 5: Build cool IoT apps that put sensor data to work using AWS IoT, Amazon Simple Notification Service and Amazon Simple Queue Service!

  • Build an app that collects and processes sensor data to solve a particular user need, ranging from ordering fresh groceries or replacing printer cartridges to automating manufacturing plants and assembly lines.
  • The app must use AWS IoT, or Amazon Simple Notification Service and Amazon Simple Queue Service, or both combined to collect and process the data, and take actions in real-time.
  • Use other AWS services.
  • Use the HTTP protocol to send data directly to Kinesis, or use the MQTT protocol to send the data to AWS IoT and then trigger a pub/sub notification to Amazon SNS (see the sketch after this list).
  • Bring your own sensor data or generate sample data using the Amazon Kinesis Data Generator and AWS IoT Simulator.
  • Leverage third-party APIs, SDKs, and services.
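
For the MQTT route, a minimal sketch using the aws-iot-device-sdk package might look like this; the certificate paths, endpoint and topic are placeholders for your own IoT thing configuration.

```javascript
// Sketch: publish simulated sensor readings over MQTT to AWS IoT.
const awsIot = require('aws-iot-device-sdk');

const device = awsIot.device({
  keyPath: './private.pem.key',          // placeholder credentials
  certPath: './certificate.pem.crt',
  caPath: './AmazonRootCA1.pem',
  clientId: 'demo-sensor',
  host: 'your-endpoint.iot.eu-west-1.amazonaws.com', // placeholder endpoint
});

device.on('connect', () => {
  // Publish a reading every five seconds; an IoT rule can fan this
  // out to SNS/SQS for downstream processing.
  setInterval(() => {
    device.publish('sensors/demo', JSON.stringify({
      temperature: 20 + Math.random() * 5,
      timestamp: Date.now(),
    }));
  }, 5000);
});
```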

The Result

After an intensive five-hour, adrenaline- and sugar-fueled sprint, the final solutions presented by each of the five teams were nothing short of inspired. What follows is a brief synopsis provided by each team on their chosen problem and its solution.



The Solution

Google is the gatekeeper of the Internet, but as advanced as the Search Giant’s algorithms continue to be, they still require humans to tell them what our websites are doing, what they are saying and what their purpose is. Oftentimes, that means developers, content creators and editors spend more time than is practical creating content purely to satisfy Google, so that their work is searchable and users are able to find it.

One of those tedious tasks is creating metadata for images. Without hyper-relevant, detailed tags on images, even the best-worded searches will return irrelevant results, potentially driving users to the wrong destinations or returning the wrong images altogether.

To help content creators better contextualise their uploads and ensure more accurate results, we built a plug-and-play CMS widget that makes use of Amazon Rekognition. The widget allows editors and content creators to upload images to their CMS and add the appropriate tags to each image at the click of a button. This tool drastically reduces the time and effort of tagging images and helps make online search results relevant and correct, more often.


How we did it

The CMS widget leverages the power of Amazon Rekognition to make finding the right tag for your image a snap.

As an image is uploaded, Rekognition uses deep learning technology developed by Amazon’s computer vision scientists to automatically analyse the objects, people, text, scenes and activities in it, and instantly returns a list of recommended tags based on what it sees. The editor or content creator can then verify the returned tags as valid and correct, and flag any that are incorrect.

The CMS makes a call to the Lambda with the uploaded image’s details, which returns a list of identified labels that the user can accept or reject. Accepting a label saves it as metadata against the image.
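
A simplified sketch of such a Lambda, assuming the AWS SDK for Node.js and that the CMS sends the image's S3 location (the exact request shape is illustrative):

```javascript
// Sketch: return Rekognition's suggested labels for an uploaded image.
const AWS = require('aws-sdk');
const rekognition = new AWS.Rekognition();

exports.handler = async (event) => {
  // Image details sent by the CMS (assumed payload shape).
  const { bucket, key } = JSON.parse(event.body);

  const result = await rekognition.detectLabels({
    Image: { S3Object: { Bucket: bucket, Name: key } },
    MaxLabels: 10,
    MinConfidence: 75, // only return reasonably confident tags
  }).promise();

  // Return just the label names and confidence scores for the editor to review.
  const labels = result.Labels.map(l => ({ name: l.Name, confidence: l.Confidence }));
  return { statusCode: 200, body: JSON.stringify(labels) };
};
```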

Even rejecting recommended labels helps train the system for better tagging in future.


The Solution

We’ve all sat in front of the TV endlessly browsing the myriad movie options on our favourite video streaming services, or stood at the cinema staring at a plethora of movie posters, feeling indecisive. Analysis paralysis is a real thing, but now Rateflix has you covered.

In developing our concept we asked a simple question: how do we eliminate the guesswork in making the right choice? The scores of reviews and abundance of information available on each film might be just a Google search away, but what if you could tap into exactly the right information by simply looking at the poster itself?

This is why our team decided to create an app that recognises a movie poster and the text on it, and instantly redirects you to the film’s IMDB page, giving you the ratings, reviews, trailers and other pertinent information.


How we did it

Using the Ionic Framework as the foundation for a hybrid application, we were able to spin up a native Android app with a camera viewfinder that lets users scan a poster in milliseconds. Pointing the device at the poster creates a snapshot, which is converted to base64 (avoiding photo uploads or storage entirely) and sent in a direct call to the Amazon Rekognition API.


After some quick cloud magic, the Amazon Rekognition API returns a JavaScript object with the results of the image analysis. One advantage we found was that the response includes a set of web URLs related to the analysed image, which we were able to scan for IMDB links in order to present the correct page to the user.
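
A rough sketch of the poster-scanning call, assuming the AWS SDK for Node.js (the IMDB lookup step is omitted):

```javascript
// Sketch: send the base64 camera snapshot straight to Rekognition's
// DetectText operation, no upload or storage required.
const AWS = require('aws-sdk');
const rekognition = new AWS.Rekognition();

async function scanPoster(base64Image) {
  const result = await rekognition.detectText({
    Image: { Bytes: Buffer.from(base64Image, 'base64') },
  }).promise();

  // Keep only full detected lines (e.g. the movie title), not single words.
  return result.TextDetections
    .filter(d => d.Type === 'LINE')
    .map(d => d.DetectedText);
}
```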

In further iterations we hope to extract the most relevant information, such as review scores and cast and crew details, and display it as an AR overlay directly on the poster itself in real-time.


The Solution

If you've ever found yourself in a situation where you're not able to read through incoming Slack messages - driving your car, out for a run, etc. - but you don't want to miss out on the conversation, Vocable will keep you in the loop. Notifications of incoming messages include the option to "read" the content to you at the touch of a button.

You can even respond with spoken input that is automatically transcribed back to the channel of origin for hands-free conversation - even when you don’t speak the same language as the other person!



How we did it

We built a hybrid app that monitors specific channels in your Slack feed and provides an interface with quick actions, letting you interact as if you were having a conversation.

When a new message is received, the application calls an AWS Lambda endpoint, passing the content for analysis. The message is translated using Amazon Translate, and the translated content is converted to an audio version of the message using Amazon Polly. The resulting audio file is then played back to you.
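
A condensed sketch of that translate-then-speak step, assuming the AWS SDK for Node.js; the language codes and voice are illustrative.

```javascript
// Sketch: translate an incoming Slack message, then synthesise audio.
const AWS = require('aws-sdk');
const translate = new AWS.Translate();
const polly = new AWS.Polly();

async function speakMessage(text) {
  // Translate the incoming message into the listener's language.
  const { TranslatedText } = await translate.translateText({
    Text: text,
    SourceLanguageCode: 'auto', // let Translate detect the source language
    TargetLanguageCode: 'en',
  }).promise();

  // Convert the translated text to speech.
  const { AudioStream } = await polly.synthesizeSpeech({
    Text: TranslatedText,
    OutputFormat: 'mp3',
    VoiceId: 'Joanna',
  }).promise();

  return AudioStream; // a Buffer the app plays back to the user
}
```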



At this point you can reply to the message and the process is reversed: your voice command is recorded and transcribed using Amazon Transcribe, and the message is again translated and sent to Slack in text format.
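
The reverse path might be sketched as follows, assuming the recorded reply has already been uploaded to S3 and a standard Slack bot token is available; transcription-job completion handling is elided.

```javascript
// Sketch: transcribe a recorded voice reply, then post text to Slack.
const AWS = require('aws-sdk');
const https = require('https');
const transcribe = new AWS.TranscribeService();

async function transcribeReply(s3Uri) {
  // Kick off an asynchronous transcription job for the voice recording.
  await transcribe.startTranscriptionJob({
    TranscriptionJobName: `reply-${Date.now()}`,
    Media: { MediaFileUri: s3Uri },
    MediaFormat: 'mp3',
    LanguageCode: 'en-US',
  }).promise();
  // In practice, job completion would be polled (or caught via a
  // CloudWatch event) before fetching the transcript text.
}

function postToSlack(channel, text, token) {
  // Standard Slack Web API call: chat.postMessage with a bot token.
  const payload = JSON.stringify({ channel, text });
  const req = https.request({
    hostname: 'slack.com',
    path: '/api/chat.postMessage',
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${token}`,
    },
  });
  req.end(payload);
}
```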

In this way we were able to create a basic version of Douglas Adams’ Babel fish, one that enables you to have a conversation with anyone in any language in near real-time, regardless of the means of input.



The Solution

Standing in line is never fun, especially when there never seem to be nearly enough open till points for the increasingly annoyed customers queuing up to have their purchases checked out. Team D wanted to help retailers solve this problem by creating a system that monitors queue length in a store and observes people’s behaviour to detect their emotional state.

Should the queue get longer, the manager is notified to increase the number of cashiers. The interface displays the store details, queue lengths over time and customer sentiment over time as queues expand; this data helps store managers spot patterns and suggests busy times and optimal cashier rotation schedules based on activity. Consumer metrics presented in this fashion also allow business owners to make decisions in real-time that lead to an increase in customer satisfaction.

Happier customers are more likely to return, and won't be scared into shopping only at certain times of the month, week or day due to unbearable queue lengths and slow service.



How we did it

Using any standard camera, images of a queue were uploaded to an AWS S3 bucket, which triggered an AWS Lambda process. That process fired off a request to Amazon Rekognition to parse the images in two separate passes: one to count all the objects in the image, and the other to determine the emotion in any faces it could recognise. The data returned was then placed in an outbound S3 bucket for retrieval.
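
A condensed sketch of that pipeline, assuming the AWS SDK for Node.js; the output bucket name is a placeholder.

```javascript
// Sketch: S3-triggered Lambda that counts people and reads emotions
// from a queue snapshot, then writes a summary to an outbound bucket.
const AWS = require('aws-sdk');
const rekognition = new AWS.Rekognition();
const s3 = new AWS.S3();

exports.handler = async (event) => {
  const record = event.Records[0].s3;
  const image = { S3Object: { Bucket: record.bucket.name, Name: record.object.key } };

  // Run both analyses in parallel on the same snapshot.
  const [labels, faces] = await Promise.all([
    rekognition.detectLabels({ Image: image, MinConfidence: 80 }).promise(),
    rekognition.detectFaces({ Image: image, Attributes: ['ALL'] }).promise(),
  ]);

  const personLabel = labels.Labels.find(l => l.Name === 'Person');
  const summary = {
    queueLength: personLabel ? personLabel.Instances.length : 0,
    // Highest-confidence emotion per detected face.
    emotions: faces.FaceDetails.map(
      f => f.Emotions.sort((a, b) => b.Confidence - a.Confidence)[0].Type
    ),
    timestamp: Date.now(),
  };

  // Drop the summary into the outbound bucket for the dashboard to pick up.
  await s3.putObject({
    Bucket: 'queue-results-bucket', // placeholder name
    Key: `${record.object.key}.json`,
    Body: JSON.stringify(summary),
  }).promise();
};
```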

By building a simple dashboard, using Laravel and Angular, to process the data from Amazon Rekognition and sequence it in time order, the team was able to use multiple time-series and radar graphs to highlight queue length, general customer satisfaction, gender and age.

To keep the solution universally applicable regardless of the library in use (in our case the Rekognition Node.js library), all the functionality was thoroughly documented against the release version, so that any step in the process could quickly be replaced with another language or library.


The Solution

In all industries, no matter how big or small, digital or not, capturing meeting notes and reminders is still a cumbersome manual process. Traditionally done by capturing these notes on a device like a mobile phone or laptop, writing them down on paper, or voice-recording them to be transcribed later, notes are often missed or lost. To solve this problem Team E developed two automated digital solutions:

The first is an Alexa Skill for Note Capture that enables attendees in a meeting to talk to the application; in turn, the application automatically transcribes the speech to text and saves the notes to a Confluence document.

The second solution is an Alexa Skill for Reminders that uses verbal commands when in a meeting to send reminders within Slack to the Slack members that need to complete a task or action. The member is then reminded of the action and can update the status of the action once it has been completed. This, in turn, updates the sender on the action taken.



How we did it

Using the Amazon Alexa services allowed us to develop Skills that run on the physical device and capture a conversation in a chatbot-like manner. From the recorded audio, text was automatically transcribed and this data sent to a custom Slack integration via auth tokens and Slack's various APIs.

This was done using AWS Lambdas written in Node.js to transform the data before POSTing it to the desired endpoint(s) in a format of our choosing. The team also made use of CloudWatch to capture logs and track data and interactions from users.
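
As an illustrative sketch (the intent, slot name and webhook URL are all hypothetical), the note-capture handler might look like this:

```javascript
// Sketch: Alexa note-capture intent that forwards the transcribed
// note to Slack. Assumes ask-sdk-core and a slot named "note" on a
// hypothetical CaptureNoteIntent.
const Alexa = require('ask-sdk-core');
const https = require('https');

const CaptureNoteIntentHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'CaptureNoteIntent';
  },
  async handle(handlerInput) {
    // Alexa has already transcribed the speech; read it from the slot.
    const note = Alexa.getSlotValue(handlerInput.requestEnvelope, 'note');

    // POST the note to Slack via an incoming-webhook URL (placeholder).
    const payload = JSON.stringify({ text: `Meeting note: ${note}` });
    await new Promise((resolve, reject) => {
      const req = https.request('https://hooks.slack.com/services/XXX/YYY/ZZZ',
        { method: 'POST', headers: { 'Content-Type': 'application/json' } },
        resolve);
      req.on('error', reject);
      req.end(payload);
    });

    return handlerInput.responseBuilder.speak('Note captured.').getResponse();
  },
};

exports.handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(CaptureNoteIntentHandler)
  .lambda();
```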

We hope that by utilizing these Alexa Skills we can minimize the effort involved in physically capturing these meeting notes and allow attendees within these meetings to actually concentrate on what is being said and add real value.


Key Learnings

We asked the teams what key lessons they took from the day. For many it reaffirmed just how much can be achieved in a single day with proper upfront planning, coordination and multi-functional teams focused on a clear and unified goal.

However, even with the best-laid plans, it’s not always easy to build a working prototype or even a simple interface in the space of a few hours. Luckily the AWS toolset is incredibly robust, well documented and easy to access and make use of when developing your own custom solutions. Building on an idea that already had the crucial components in place helped significantly cut down on the development needed to realise a fully functioning solution.

The key learning, resoundingly, appeared to be the learning experience itself. "There are so many assumptions that are challenged, unknowns that emerge and unexpected hurdles to clear that no amount of discussion or upfront analysis could have prepared us for," one team lead said. "The best ideas and refinements of our initial thinking came from just jumping in and seeing what works."

Conclusion

It’s no exaggeration to say that we were completely blown away by the ingenuity and creativity shown by each of the teams in building out their products. To see fully functional prototypes that actually prove a tangible solution to real-world problems materialise in the space of a few hours was as inspiring as it was gratifying.

We couldn’t be more excited about what the future holds, given the incredible technology our extraordinary team of engineers, product and design experts has at its disposal for building meaningful and impactful solutions, and we can’t wait for the next Swipe iX Hackathon in Q2.

If you would like to find out how your business can utilise the power of cloud technology, or are interested in sponsoring Swipe iX's next Hackathon event and submitting your own unique challenges for our teams to tackle, drop us a line at info@swipeix.com.

Jacques Fourie

Hendri Lategan

COO
