PhD student. Python coder. Machine learning enthusiast. Former game designer. Data lover.
Jacky Casas
HEIA-FR
Boulevard de Pérolles 80
CP 32 – 1705 Fribourg
Switzerland
Deeplink is an artificial intelligence startup specialized in chatbot technology. The company relies heavily on NLU (Natural Language Understanding) to determine the user's intention when communicating with the chatbot. Currently, the NLU solution is hosted on a dedicated server, and exchanges between it and the client happen through HTTP requests.
The idea of this project is to run the NLU solution directly in the client's browser. With such an approach, no request containing sensitive data is ever sent to an external server, since model inference takes place in the browser. Besides offering much stronger privacy guarantees, moving model inference should also bring better performance in terms of latency and scalability, and could in the future allow personalized models for each user.
A norovirus outbreak occurred in France and was caused by raw shellfish and oysters. Switzerland, Sweden, Italy and the Netherlands have also reported outbreaks linked to live oysters from France. Symptoms such as diarrhea and vomiting, as well as the incubation times, are consistent with norovirus or other enteric virus infections.
The number of people in France who have become ill after eating contaminated raw shellfish has jumped to more than 1,000. The outbreak has spurred international product recalls, and many media outlets have covered the norovirus epidemic.
Social media services such as Twitter are valuable sources of information for surveillance systems. A digital syndromic surveillance system has several advantages including its ability to overcome the problem of time delay in traditional surveillance systems.
In this project, a Twitter-based data analysis system was developed to analyze the textual content of a set of tweets and see whether there is any evidence of the norovirus outbreak in France or Switzerland.
We create three chatbots: one benchmark bot and two chatbots applying different empathy enhancements. Additionally, we implement an emotion classifier that predicts the emotional state of text-based messages, with impressive evaluation results.
This project lies in the context of a collaboration between the HumanTech Institute and Kare Knowledgeware. Kare's product is an automated knowledge retrieval conversational tool. The goal of the collaboration is to enhance the customer experience by adding empathy to the system. Empathy requires two steps: first understanding the user's intention, and second answering accordingly.
The main objective is to develop a tool that allows a user to extract useful information about a query: intent (informational vs emotional) and sentiment (happy, upset). A user query enters the system, and the same query exits the system with annotations.
To achieve this goal, a system must be developed. This system has to:
1. Extract the intent from the query
2. Extract the emotion from the query
The current solution meets the initial objective. The intent classification model reaches 85% accuracy. The deep learning model for emotion detection reaches 80% accuracy, and the simpler model using hand-crafted features for emotion detection reaches 60% accuracy, both on a four-class problem (joy, anger, fear, sadness). Kare Knowledgeware can try these models on their data and see how they perform. Hopefully, the models will perform well on their data too, and they can then work on answering user queries with more empathy.
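To give a concrete idea of what such an emotion classifier can look like, here is a minimal sketch of a four-class text classifier built with scikit-learn. It is only a toy baseline with invented example sentences, not the deep learning model or the hand-crafted-feature model mentioned above.

```python
# Minimal sketch of a 4-class emotion classifier (joy, anger, fear, sadness).
# Illustrative baseline only; the example sentences are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "I am so happy to see you again!",       # joy
    "This is outrageous, I want a refund!",  # anger
    "I'm scared something went wrong.",      # fear
    "I feel really down today.",             # sadness
]
train_labels = ["joy", "anger", "fear", "sadness"]

# TF-IDF features feeding a simple linear classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)

# With a real training set, this should output the predicted emotion label
print(model.predict(["I can't wait, this is wonderful news"]))
```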
Social networks, and digital communication in general, have evolved at an impressive speed in recent years. They have enabled everyone to stay in constant contact with family members, co-workers or classmates. This technological progress has also brought with it a number of disadvantages, one of them being cyberbullying.
Cyberbullying, which is simply bullying that occurs on digital devices, is primarily directed at teens. In the past, this problem was more or less limited to school boundaries. But unfortunately, technology has removed these boundaries and so the bullying continues unabated, leaving no respite for the victims. The consequences are numerous and this phenomenon has already led to many suicides. It is therefore necessary to be able to detect cyberbullying on social networks and take action accordingly.
During this project, it soon became clear that the lack of existing resources, in particular datasets containing relatively recent cyberbullying texts, would complicate the task. Therefore, a slightly different approach was adopted and the objective was redefined: the question was no longer to detect cyberbullying, but rather to determine whether or not a text containing insults was hateful.
To do so, around 4,000 tweets were collected and labelled. Different features were extracted from this dataset, and predictions were made mainly with random forest and neural network models. This process made it possible to identify the most useful features, which turned out to be the TF-IDF values. Combining these features with a few others made it possible to reach an accuracy of 72.76%, a relatively low score for a binary classification problem.
It is possible that the model currently in use relies too heavily on the statistics of the various insults. For example, if an insult appears predominantly in positive samples rather than negative ones, the model will have difficulty correctly predicting the samples containing this insult whose class should actually be negative. Of course, the opposite is also true.
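For readers unfamiliar with the approach, the sketch below shows the core of a TF-IDF plus random forest pipeline of the kind described above. The tweets and labels are invented placeholders (the real project used roughly 4,000 labelled tweets and additional hand-crafted features), so the printed accuracy is meaningless here.

```python
# Toy TF-IDF + random forest pipeline for hateful / not-hateful classification.
# Placeholder data only; not the project's actual dataset or feature set.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

tweets = ["you are an idiot and I hate you",
          "haha you idiot, love you mate",
          "nobody wants you here, disappear",
          "you silly goose, that was hilarious"]
labels = [1, 0, 1, 0]  # 1 = hateful, 0 = not hateful

# TF-IDF values as features, as in the project
X = TfidfVectorizer().fit_transform(tweets)

X_train, X_test, y_train, y_test = train_test_split(X, labels,
                                                    test_size=0.5,
                                                    random_state=42)
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```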
The structuring of textual data is a paramount issue in today's technological reality. Textual data is part of our daily life: we write it, text it, tweet it, and receive it in huge quantities on a regular basis, making the realm of data largely dependent on textual content that is highly unstructured by nature and therefore hard to exploit. In a world where machine learning, big data and smart assistants are becoming the trend, companies rely on these concepts to grow their businesses. A platform that makes text analytics techniques accessible to everyone therefore becomes essential. This is where Wisely comes in. In a nutshell, Wisely provides two of the most used branches of Natural Language Processing: Named Entity Recognition and Natural Language Understanding. Using our platform, a non-technical user can import their own dataset, apply the desired processing and export the results for later use. This report is intended to help you gain a better understanding of how Wisely works by presenting its implementation details from all angles.
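To give a concrete idea of what Named Entity Recognition produces, here is a minimal sketch using the off-the-shelf spaCy model en_core_web_sm. This is only an illustration; the summary above does not specify which NER backend Wisely actually uses.

```python
# Minimal NER example with spaCy.
# Requires the model: python -m spacy download en_core_web_sm
# Illustrative only; not necessarily Wisely's actual NER backend.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple opened a new office in Fribourg in September 2019.")

# Print every detected entity with its label, e.g. "Apple ORG", "Fribourg GPE"
for ent in doc.ents:
    print(ent.text, ent.label_)
```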
Multilingual Appointment Chatbot is a project in collaboration with a Swiss startup called Deeplink, specialized in chatbot technologies. For one of their customers, Deeplink needs a chatbot able to detect whether a text message sent by a human contains a date and a time, and to respond with a proper answer. This chatbot will be used to schedule appointments with customers.
The goal of the project is to compare several popular Natural Language Processing algorithms with text in French translated by a translation service. After testing those algorithms, a scoreboard will be made with specific criteria to find the best viable solution.
After that, it is planned to create a Telegram bot in order to interact with the company schedule and the customer.
Many bots, when they do not understand the user (the sentence, the intention), reply with the stock phrase "Could you rephrase, I don't understand". This can quickly become annoying for the user.
This is where the crazy idea of the Movie Dialog Bot project comes in. In order to keep the user engaged, the bot must reply with an entertaining message. The idea is precisely to show the user a famous movie quote, along with information about the actor, the character and the title of the film. The quote is not chosen at random! Machine learning is used to determine the best quote to display, based on the sentence sent by the user.
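As a rough illustration of how the "best quote" could be selected, the sketch below ranks a few placeholder quotes by TF-IDF cosine similarity to the user's sentence. This is only one simple way to do it, not necessarily the machine learning approach used in the project.

```python
# Toy quote selection by TF-IDF cosine similarity.
# Placeholder quotes; not the project's actual model or data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

quotes = [
    "May the Force be with you.",
    "I'm going to make him an offer he can't refuse.",
    "Houston, we have a problem.",
]
user_message = "we have a big problem here"

vectorizer = TfidfVectorizer().fit(quotes + [user_message])
scores = cosine_similarity(vectorizer.transform([user_message]),
                           vectorizer.transform(quotes))[0]

# Prints the quote with the largest word overlap: "Houston, we have a problem."
print(quotes[scores.argmax()])
```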
Privacy is a widely publicized topic. Personal data are scattered everywhere, often in the possession of big companies like Google and Facebook. Messaging applications are particularly affected, as we communicate on a daily basis with our loved ones through these applications.
The main objective is to develop a tool that allows a user to extract the important data from their conversations, such as the locations mentioned, the list of people with whom they talk the most, the emotions, the personality, etc.
The purpose of this project is to enhance the interaction between users and the HumanTech website by integrating a chatbot that is able to answer questions about the institute.
Asbestos is a highly toxic silicate mineral that has been used widely in many products for its insulating, non-flammable and heat-resistant properties. Exposure to high concentrations of asbestos may lead to chronic inflammation of the lungs and cancer. After its toxicity became known, much effort was undertaken, and still is to this day, to remove asbestos from buildings, roofs and other materials used in industry and in public spaces. Asbestos detection is a manual, complex and time-intensive process that requires an experienced expert in order to obtain consistent and correct results. In an attempt to reduce manual labor and increase consistency in detection, machine learning models have recently been adopted to automate the detection process.
The AlertCenter application, developed by the HumanTech Institute, aims to analyze the posts (tweets) published on the social network Twitter concerning food contamination in Switzerland. However, this application faces some problems regarding the localization of tweets. This is why the project "The Swiss Way of Following" seeks to address this problem by creating a database that gathers all Swiss Twitter accounts, thereby reducing the number of non-localized tweets.
The goal of this project is to determine the location of a Twitter user, more generally to find out whether they are Swiss or not, from their posted messages (tweets), based solely on the content of the messages.
This is a chatbot that aims to help people suffering from an eating disorder (ED). The application will question the user in order to save the details of each crisis in a database, suggest appropriate prevention strategies, and remain attentive to the user in case of a problem.
In the 1950s, many construction materials used asbestos, a fibrous mineral, as a component because of its physical characteristics. Indeed, asbestos has properties such as resistance to high temperatures (around 1000 degrees Celsius), great resistance to aggressive chemicals, excellent thermal and electrical insulation, good tensile strength and great elasticity. It was notably used for floor coverings, facade renders and also for the tile adhesives on which this project is based.
Unfortunately, asbestos fibres can detach from materials during the vibrations that occur when cutting, drilling or otherwise working on various materials. These fibres can then be inhaled by humans and lodge in the lungs, where they can cause various, sometimes incurable, diseases such as lung cancer, cancer of the larynx or cancer of the pleura, a very thin thoracic membrane. Asbestos fibres have caused many deaths following inhalation, and the use of asbestos in construction was banned in Switzerland in 1990. Some cantons, such as Geneva and Vaud, have cantonal regulations on asbestos for the demolition or transformation of buildings constructed before 1991. Such work is subject to an asbestos diagnosis carried out by an accredited laboratory such as the Microscan Service laboratory based in Chavannes-près-Renens, which inspired this project.
Information systems I (SI-I) Computer science students (2nd year of Bachelor degree)
(2017 - 2020) HEIA-FR
Information systems II (SI-II) Computer science students (3rd year of Bachelor degree)
(2018 - 2020) HEIA-FR
Machine learning (ML) Computer science students (3rd year of Bachelor degree)
(2017 - 2019) HEIA-FR
Software Engineering (GL-I) Computer science students (2nd year of Bachelor degree)
(2016 - 2019) HEIA-FR
Software Engineering (GL-T) Telecom students (2nd year of Bachelor degree)
(2016 - 2019) HEIA-FR
Web engineering (WebEng)
(2017 - 2017) University of Fribourg
Seminar Conversational agents & chatbots
(2017 - 2020) University of Fribourg
User Interface (UseInf)
(2018 - 2020) MSE (Master in Engineering, HES-SO)
Multimodal Processing, Recognition & Interaction (MPRI)
(2016 - 2020) MSE (Master in Engineering, HES-SO)
Want to be notified of new events, projects and publications of HumanTech?
Send us an email: elena.mugellini@hefr.ch, we will get back to you shortly.
You can also follow us on social media: