A COMMUNICATION AID FACILITATING COMMUNICATION ASSISTANCE FOR USERS WITH SPEECH IMPAIRMENT AND METHOD THEREOF

ORDINARY APPLICATION

Published

Filed on 9 November 2024

Abstract

The present disclosure relates to a communication aid (100) facilitating communication assistance for users with speech impairment and a method thereof. The communication aid includes a control unit (106) coupled to one or more functional components. The control unit (106) receives one or more key impressions generated by the user (120) with a speech impairment condition employing the keypad (102) to identify a mode to operate the communication aid. If the mode identified is mode 1, the control unit (106) detects the key impressions and retrieves a specific sentence corresponding to the key impressions to display a communication sentence the user (120) desires to convey to a concerned person (122). Furthermore, if the mode identified is mode 2, the control unit (106) captures the voice of the user and processes the captured voice to detect an error in the slurred speech of the user and generate a voice signal by correcting the error in the slurred speech.

Patent Information

Application ID: 202441086516
Invention Field: ELECTRONICS
Date of Application: 09/11/2024
Publication Number: 46/2024

Inventors

Name | Address | Country | Nationality
R. MENAKA | Professor, Centre for Cyber Physical Systems, Vellore Institute of Technology, Chennai, Vandalur - Kelambakkam Road, Chennai, Tamil Nadu - 600127, India. | India | India
R. KARTHIK | Associate Professor, Centre for Cyber Physical Systems, Vellore Institute of Technology, Chennai, Vandalur - Kelambakkam Road, Chennai, Tamil Nadu - 600127, India. | India | India
R. SIVAKUMAR | Professor, Faculty of Physiotherapy, Sri Ramachandra Faculty of Physiotherapy, Chennai, Tamil Nadu - 600116, India. | India | India
SAHNAAZ MARIAM | UG Student, School of Electronics Engineering, Vellore Institute of Technology, Chennai, Vandalur - Kelambakkam Road, Chennai, Tamil Nadu - 600127, India. | India | India
LAKSHMANAN | UG Student, School of Electronics Engineering, Vellore Institute of Technology, Chennai, Vandalur - Kelambakkam Road, Chennai, Tamil Nadu - 600127, India. | India | India

Applicants

Name | Address | Country | Nationality
VELLORE INSTITUTE OF TECHNOLOGY, CHENNAI | Vandalur - Kelambakkam Road, Chennai, Tamil Nadu - 600127, India. | India | India

Specification

Description:

TECHNICAL FIELD
[0001] The present disclosure relates to the field of assistive devices. In particular, it relates to a communication aid facilitating communication assistance for users with speech impairment and the method thereof.

BACKGROUND
[0002] Background description includes information that may be useful in understanding the present disclosure. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed disclosure, or that any publication specifically or implicitly referenced is prior art.
[0003] Assistive technology is a valuable resource for persons who struggle to communicate due to speech or hearing loss. All services, programs, or goods that help the elderly or persons with disabilities to lead a better life are considered forms of assistive technology. Promoting effective communication for those with communication problems requires easily available assistive technologies. The effectiveness of communication and the reciprocation of the needs of speech-impaired people depend largely on the caretaker's understanding of the patient. Mostly, people with speech impairment employ a written chart to indicate sentences to one another. Besides, people with speech impairments communicate through writing, drawings, gestures, and facial expressions. Electronic communication devices such as Augmentative and Alternative Communication (AAC) devices or speech-generating devices are also available in the market. One of the existing prior arts, authored by van de Sandt-Koenderman, W. M. E. et al., entitled "A computerised communication aid in severe aphasia: An exploratory study", discloses a handheld computerised communication aid for aphasia called TouchSpeak (TS) and investigates the efficacy of TS. One of the existing US patent publications, US9424842B2, entitled "Speech recognition system including an image capturing device and oral cavity tongue detecting device, speech recognition device, and method for speech recognition", discloses a speech recognition system to be used on a human subject that includes an image capturing device, an oral cavity detecting device, and a speech recognition device.
[0004] The existing electronic communication devices are highly expensive with separate device charges and hardware installation charges. Currently, there is no customized gadget that allows users to converse in their native language. Hence, a need exists in the art to provide a cost-effective and customizable assistive device that can address the challenges faced by speech-impaired people.

OBJECTS OF THE PRESENT DISCLOSURE
[0005] Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are listed herein below.
[0006] It is an object of the present disclosure to provide a communication aid facilitating communication assistance for users with speech impairment and method thereof.
[0007] It is another object of the present disclosure to provide a communication aid and method which provides communication assistance for people suffering from communication disorders by employing a unique and customisable embedded board with multilingual support in English, regional languages, Braille, and images.
[0008] It is another object of the present disclosure to provide a communication aid and method which operates in two distinct modes, tailored to the needs of the patients with speech imperfections.
[0009] It is another object of the present disclosure to provide a communication aid and method which uses a speech signal processing method for slurred speech detection by analysing pitch, formants, and intensity through rule-based phoneme detection.

SUMMARY
[00010] The present disclosure relates to the field of assistive devices. In particular, it relates to a communication aid facilitating communication assistance for users with speech impairment and the method thereof.
[00011] An aspect of the present disclosure pertains to a communication aid facilitating communication assistance for users with speech impairment. The communication aid includes at least one control unit coupled to one or more functional components. The one or more functional components include any or a combination of a keypad, an audio recording unit, a display unit, and a voice generation unit. The at least one control unit is configured to receive one or more key impressions generated by at least one user with a speech impairment condition employing one or more keys on the keypad to identify a mode to operate the communication aid. The mode includes at least one of a mode 1 and a mode 2. Further, the at least one control unit can be configured to detect the one or more key impressions to retrieve a specific sentence corresponding to the one or more key impressions to display a communication sentence that at least one user desires to convey to a concerned person on the display unit if the mode identified is the mode 1. Furthermore, the at least one control unit can be configured to capture a voice of the at least one user employing the audio recording unit, and process the voice captured to detect an error in a slurred speech of the at least one user and generate a voice signal employing the voice generation unit by correcting the error in the slurred speech if the mode identified is the mode 2. The voice signal is generated corresponding to a standard speech formed from the slurred speech by correcting the error.
[00012] In an aspect, the communication aid is configured to allow the at least one user to communicate with the concerned person by generating the one or more key impressions and displaying the communication sentence on the display unit together with the emission of the voice signal corresponding to the communication by the voice generation unit in a preferred language of the at least one user in mode 1. Further, the communication aid is configured to provide the standard speech by converting the slurred speech of the at least one user in mode 2 by extracting one or more features from the slurred speech and processing the slurred speech employing a combination of a signal processing technique and pre-defined speech guidelines based on the one or more features extracted.
[00013] In an aspect, the mode is selected by the at least one user based on the speech impairment condition of the at least one user. The speech impairment condition of the at least one user includes at least one of an impairment affecting the ability to comprehend sentences, and an impairment generating the slurred speech. The mode 1 is employed by the at least one user with the impairment affecting the ability to comprehend sentences and the mode 2 is employed by the at least one user with the impairment generating the slurred speech. The impairment affecting the ability to comprehend sentences includes at least one of an Aphasia, and a Dysphasia, and the impairment generating the slurred speech includes at least one of a Dysarthria, a tongue-tie, a nasal speech, and a cleft palate speech.
[00014] In an aspect, the at least one control unit is configured to scan continuously a keypad matrix to identify the one or more key impressions generated by the at least one user. Further, the at least one control unit is configured to perform a mapping between each of the one or more key impressions with the specific sentence together with an audio file pre-stored in the memory to generate the communication sentence for display. The communication sentence is displayed on the display unit to enable the concerned person to understand an idea the at least one user desired to convey. The communication sentence is converted to an audible speech and the voice signal corresponding to the audible speech is emitted out through the voice generation unit to enable the concerned person to understand the idea the at least one user desires to convey.
[00015] In an aspect, the at least one control unit is configured to identify one or more phonemes in the slurred speech based on the one or more features extracted by employing a rule-based phoneme detection technique. The one or more features include at least one of a pitch, a formant, and an intensity. Further, the at least one control unit is configured to detect the correction required in the slurred speech by performing the mapping between the one or more phonemes identified with a correct speech pattern employing a phonetic analysis. Further, the at least one control unit is configured to refine an articulation of the slurred speech by employing a formant shifting technique after completing the phonetic analysis of the slurred speech. Furthermore, the at least one control unit is configured to align the one or more phonemes in the slurred speech with the standard speech pattern by adjusting the timing of the one or more phonemes within the slurred speech employing a time normalisation technique to generate a standard speech.
[00016] In an aspect, the keypad includes one or more phrases on the one or more keys that are used by the at least one user curated based on a plurality of communication charts issued by a healthcare facility. The one or more phrases are mentioned in the keypad in one or more ways. The one or more ways include any or a combination of one or more communication languages and an image. The one or more communication languages include any or a combination of an English, a regional language of choice, and a Braille.
[00017] In an aspect, the communication aid is configured to employ a text-to-speech conversion technique to generate the voice signal from the communication sentence in the mode 1 and the standard speech in the mode 2.
[00018] In an aspect, the communication aid is a portable device and customizable with one or more easy to understand images.
[00019] In an aspect, the present disclosure provides a method for facilitating communication assistance for users with speech impairment by a communication aid. The method includes the step of initialising the communication aid by providing power employing a power supply. Further, the method includes the step of receiving, by a control unit in the communication aid, one or more key impressions generated by at least one user with a speech impairment condition employing a keypad in the communication aid to identify a mode to operate the communication aid. The mode includes at least one of a mode 1 and a mode 2. Further, the method includes the step of detecting, by the control unit, the one or more key impressions to retrieve a specific sentence corresponding to the one or more key impressions to display a communication sentence the at least one user desired to convey to a concerned person on a display unit in the communication aid if the mode identified is the mode 1. Furthermore, the method includes the step of capturing, by the control unit, a voice of the at least one user employing an audio recording unit in the communication aid, and processing the voice captured to detect an error in a slurred speech of the at least one user and generate a voice signal employing the voice generation unit by correcting the error in the slurred speech if the mode identified is the mode 2. The voice signal is generated corresponding to a standard speech formed from the slurred speech by correcting the error.
[00020] Various objects, features, aspects, and advantages of the inventive subject matter will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which like numerals represent like components.

BRIEF DESCRIPTION OF DRAWINGS
[00021] The accompanying drawings are included to provide a further understanding of the present disclosure, and are incorporated in, and constitute a part of this specification. The drawings illustrate exemplary embodiments of the present disclosure, and together with the description, serve to explain the principles of the present disclosure.
[00022] In the figures, similar components, and/or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label with a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.
[00023] FIG. 1 illustrates an exemplary block diagram of the proposed communication aid 100 facilitating communication assistance for users with speech impairment, in accordance with an embodiment of the present disclosure.
[00024] FIG. 2A illustrates an exemplary circuit diagram 200 of the proposed communication aid 100 facilitating communication assistance for users with speech impairment, in accordance with an embodiment of the present disclosure.
[00025] FIG. 2B illustrates an exemplary view of the keypad employed in the proposed communication aid 100 facilitating communication assistance for users with speech impairment, in accordance with an embodiment of the present disclosure.
[00026] FIG. 3 illustrates an exemplary flow diagram 300 of the method for facilitating communication assistance for users with speech impairment by a communication aid 100, in accordance with an embodiment of the present disclosure.
[00027] FIG. 4 illustrates an exemplary process flow diagram 400 of the proposed communication aid 100, in accordance with an embodiment of the present disclosure.

DETAILED DESCRIPTION
[00028] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the appended claims.
[00029] Embodiment of the present disclosure relates to a communication aid facilitating communication assistance for users with speech impairment and the method thereof. In specific, it relates to a communication aid and method which provides communication assistance for people suffering from communication disorders by employing a unique and customisable embedded board through multilingual support in English, regional languages, Braille, and images.
[00030] Various aspects of the present disclosure are described with respect to FIG 1-4.
[00031] FIG. 1 illustrates an exemplary block diagram of the proposed communication aid 100 facilitating communication assistance for users with speech impairment, in accordance with an embodiment of the present disclosure.
[00032] In an embodiment, referring to FIG. 1, the exemplary block diagram of the proposed communication aid 100 facilitates communication assistance for users with speech impairment. The communication aid 100 may include at least one control unit 106 coupled to one or more functional components. The one or more functional components can include, but not limited to: a keypad 102, an audio recording unit 108, a display unit 110, a voice generation unit 112, a communication module, and the like. The control unit 106 can be configured to receive one or more key impressions generated by at least one user 120 with a speech impairment condition employing one or more keys 104-1, 104-2, …, 104-N on the keypad 102 to identify a mode to operate the communication aid 100. The mode can include, but not limited to: a mode 1, a mode 2, and the like.
[00033] In an embodiment, the mode is selected by the user 120 based on the speech impairment condition of the user 120. The speech impairment condition of the user 120 can include, but not limited to: an impairment affecting the ability to comprehend sentences, an impairment generating the slurred speech, and the like. The mode 1 may be employed by the user 120 with the impairment affecting the ability to comprehend sentences and the mode 2 may be employed by the user 120 with the impairment generating the slurred speech. The impairment affecting the ability to comprehend sentences can include, but not limited to: an Aphasia, a Dysphasia, and the like. The impairment generating the slurred speech can include, but not limited to: a Dysarthria, a tongue-tie, a nasal speech, a cleft palate speech, and the like.
[00034] In an exemplary embodiment, the communication aid 100 can act as a communication aid for the user 120 with the impairment affecting the ability to comprehend sentences in mode 1 and as a communication aid for the user with the impairment generating the slurred speech in mode 2. For instance, the communication aid 100 can act as a communication aid for Aphasia in mode 1. In another instance, the communication aid 100 can act as a communication aid for Dysarthria in mode 2.
[00035] In an embodiment, the control unit 106 can be configured to detect the one or more key impressions to retrieve a specific sentence corresponding to the one or more key impressions to display a communication sentence the user 120 desired to convey to a concerned person 122 on the display unit 110 if the mode identified is the mode 1. The concerned person can include, but not limited to: a parent, a family member, a doctor, a nurse, a caretaker, a therapist, and the like. Furthermore, the control unit 106 can be configured to capture a voice of the user 120 employing the audio recording unit 108, and process the voice captured to detect an error in a slurred speech of the user 120 and generate a voice signal employing the voice generation unit 112 by correcting the error in the slurred speech if the mode identified is the mode 2. The voice signal is generated corresponding to a standard speech formed from the slurred speech by correcting the error.
[00036] In an embodiment, the control unit 106 can be configured to scan continuously a keypad matrix to identify the one or more key impressions generated by the user 120. Further, the control unit 106 can be configured to perform a mapping between each of the one or more key impressions with the specific sentence together with an audio file pre-stored in the memory to generate the communication sentence for display. The communication sentence is displayed on the display unit 110 to enable the concerned person 122 to understand an idea the user 120 desired to convey. The communication sentence is converted to an audible speech and the voice signal corresponding to the audible speech is emitted out through the voice generation unit 112 to enable the concerned person 122 to understand the idea the user 120 desired to convey.
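For illustration only (not part of the claimed subject matter), the sketch below shows one possible shape of the continuous keypad-matrix scan and the key-to-sentence mapping described above. The key labels, phrases, audio file names, and the read_pin/write_pin GPIO accessors are placeholders assumed for the example, not details taken from the specification.

```python
# Illustrative sketch only: continuous 4x4 keypad-matrix scan plus mapping of a key
# impression to a pre-stored sentence and audio file. read_pin/write_pin stand in for
# whatever GPIO access the chosen microcontroller or single-board computer provides.
import time

KEYS = [["1", "2", "3", "A"],
        ["4", "5", "6", "B"],
        ["7", "8", "9", "C"],
        ["*", "0", "#", "D"]]

# Hypothetical phrase table; in the device, phrases are curated from communication charts.
KEY_MAP = {
    "1": ("I am hungry", "audio/hungry.wav"),
    "2": ("I need water", "audio/water.wav"),
    "3": ("Please call the doctor", "audio/doctor.wav"),
}

def scan_keypad(write_pin, read_pin):
    """Drive one row at a time and read the columns; return the pressed key label or None."""
    for r in range(4):
        for i in range(4):
            write_pin(f"ROW{i}", i == r)          # activate only row r
        for c in range(4):
            if read_pin(f"COL{c}"):               # a closed switch pulls the column active
                return KEYS[r][c]
    return None

def run(write_pin, read_pin, display, play_audio):
    """Main mode 1 loop: scan, map the key impression, display and speak the sentence."""
    while True:
        key = scan_keypad(write_pin, read_pin)
        if key in KEY_MAP:
            sentence, audio_file = KEY_MAP[key]
            display(sentence)                     # e.g. write to the 16x2 LCD
            play_audio(audio_file)                # e.g. play the pre-stored recording
        time.sleep(0.05)                          # simple debounce/poll interval
```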
[00037] In an embodiment, the control unit 106 can be configured to identify one or more phonemes in the slurred speech based on the one or more features extracted by employing a rule-based phoneme detection technique. The one or more features include at least one of a pitch, a formant, and an intensity. Further, the control unit 106 can be configured to detect the correction required in the slurred speech by performing the mapping between the one or more phonemes identified with a correct speech pattern employing a phonetic analysis. Further, the control unit 106 can be configured to refine an articulation of the slurred speech by employing a formant shifting technique after completing the phonetic analysis of the slurred speech. Furthermore, the control unit 106 can be configured to align the one or more phonemes in the slurred speech with the standard speech pattern by adjusting the timing of the one or more phonemes within the slurred speech employing a time normalisation technique to generate a standard speech.
[00038] In an exemplary embodiment, pitch analysis focuses on determining the fundamental frequency, providing insight into the intonation and emotional tone of the speech. Formant extraction identifies the resonant frequencies in the vocal tract that define vowel sounds, which are essential for distinguishing different phonemes. The formants are typically identified using Linear Predictive Coding (LPC) analysis, which estimates the vocal tract shape and formant frequencies. Intensity measurement evaluates the loudness of the speech, offering another important dimension in assessing speech clarity and articulation. It is typically measured in decibels (dB) and can be derived using the root mean square (RMS) of the signal.
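A compact illustration of these three measurements on a single mono frame follows; it is a plain NumPy sketch written under the assumption of a 16 kHz sampling rate and a short voiced frame, not the exact routines implemented in the device.

```python
# Illustrative feature extraction for one speech frame: pitch via autocorrelation,
# formants via LPC root-finding, intensity via RMS level in dB. Not device firmware.
import numpy as np

def pitch_autocorr(frame, sr, fmin=75.0, fmax=400.0):
    """Estimate the fundamental frequency (Hz) from the autocorrelation peak."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    return sr / (lo + np.argmax(ac[lo:hi]))

def lpc_coefficients(frame, order=12):
    """LPC prediction polynomial A(z) from the autocorrelation normal equations."""
    frame = frame * np.hamming(len(frame))
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a, *_ = np.linalg.lstsq(R, r[1:], rcond=None)
    return np.concatenate(([1.0], -a))

def formants_lpc(frame, sr, order=12):
    """Formant frequencies (Hz) from the angles of the LPC polynomial roots."""
    roots = np.roots(lpc_coefficients(frame, order))
    roots = roots[np.imag(roots) > 0]              # one root per conjugate pair
    return np.sort(np.angle(roots) * sr / (2 * np.pi))

def intensity_db(frame):
    """Loudness as RMS level in decibels (relative to full scale)."""
    return 20 * np.log10(np.sqrt(np.mean(frame ** 2)) + 1e-12)

if __name__ == "__main__":
    sr = 16000
    t = np.arange(0, 0.03, 1 / sr)                 # a 30 ms synthetic voiced-like frame
    frame = np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 700 * t)
    frame += 0.01 * np.random.randn(len(t))        # small noise, closer to real speech
    print(pitch_autocorr(frame, sr), formants_lpc(frame, sr)[:3], intensity_db(frame))
```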
[00039] In an embodiment, the communication aid 100 can be configured to allow the user 120 to communicate with the concerned person 122 by generating the one or more key impressions and displaying the communication sentence on the display unit 110 together with the emission of the voice signal corresponding to the communication by the voice generation unit 112 in a preferred language of the user 120 in mode 1. Further, the communication aid 100 can be configured to provide the standard speech by converting the slurred speech of the user 120 in mode 2 by extracting one or more features from the slurred speech and processing the slurred speech employing a combination of a signal processing technique and pre-defined speech guidelines based on the one or more features extracted. The communication aid 100 can be configured to employ a text-to-speech conversion technique to generate the voice signal from the communication sentence in the mode 1 and the standard speech in the mode 2.
[00040] In an embodiment, the communication aid 100 can be a portable device that can operate in user-selected modes including the mode 1 and the mode 2. Further, the device can be customizable with one or more easy to understand images.
[00041] In an exemplary embodiment, during the operation of the communication aid 100 in mode 1, the user can communicate with the concerned person 122 by pressing the appropriate key on the keypad 102, and the corresponding sentence may be displayed on the screen and simultaneously played out loud by the speaker in the preferred language of the user, either in a male voice or in a female voice based on the user preference. The communication module 116 aids in the communication of the user 120 with the concerned person 122 through a computing device 118 associated with the concerned person 122. The computing device can include, but not limited to: a smartphone, a mobile phone, a tablet, and the like. For instance, the sentence displayed on the communication aid 100 can be transferred to the computing device 118 of the concerned person 122 through the Bluetooth module 116 for easy communication. In another instance, the sentence the user 120 desired to convey to the concerned person 122 may be transferred to a smartphone associated with the concerned person 122 employing the Bluetooth module 116 in the communication aid 100. During the operation of the communication aid 100 in mode 2, once the user presses the key 104 to record speech, the voice of the user 120 is captured, and errors in the speech of the user 120 may be detected and corrected by employing the signal processing and pre-defined error detection rules.
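As an illustration of forwarding the displayed sentence to a paired device, a short sketch using the pyserial package over a serial-profile Bluetooth link (for example an HC-05 style module bound to a serial port) is shown below; the port name, baud rate, and newline framing are assumptions for the example, not requirements of the disclosure.

```python
# Hedged sketch: push the currently displayed sentence to a paired phone over a
# serial Bluetooth link. The port, baud rate, and framing are illustrative assumptions.
import serial  # pyserial

def send_sentence_over_bluetooth(sentence, port="/dev/rfcomm0", baud=9600):
    """Write the sentence to the Bluetooth serial port, newline-terminated."""
    with serial.Serial(port, baudrate=baud, timeout=1) as link:
        link.write((sentence + "\n").encode("utf-8"))

# Example (requires a bound Bluetooth serial port on the host):
# send_sentence_over_bluetooth("I need water")
```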
[00042] Although FIG. 1 shows exemplary components of the communication aid 100, in other embodiments, the communication aid 100 may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 1. Additionally, or alternatively, one or more components of the communication aid 100 may perform functions described as being performed by one or more other components of the communication aid 100.
[00043] FIG. 2A illustrates an exemplary circuit diagram 200 of the proposed communication aid 100 facilitating communication assistance for users with speech impairment, in accordance with an embodiment of the present disclosure.
[00044] In an exemplary embodiment, referring to FIG. 2A, the circuit diagram 200 of the communication aid 100 may include a microcontroller 202 coupled to an electronic keypad 204, a Liquid Crystal Display (LCD) screen 206, a speaker 210, a microphone 208, a Bluetooth module 214, and a battery 212. In an instance, the communication aid 100 may be initiated by giving a power supply of at least 5V (DC) employing the battery. Further, the circuit of the communication aid 100 can include, but not limited to: an Analog-to-Digital converter, an external memory device, a Digital Signal Processing (DSP) chip, and the like. Once the user presses a key on the electronic keypad 204, the microcontroller 202 detects the key press. The electronic keypad 204 may be connected to the microcontroller 202 employing digital input pins. Once the microcontroller 202 detects the key press, the microcontroller 202 retrieves the corresponding sentence and audio file from its memory. Further, the microcontroller 202 sends the sentence to the LCD screen 206 for visual display. The LCD screen 206 may be connected to the microcontroller 202 employing digital output pins. The sentence appears on the screen, allowing the user to read the sentence. The amplified audio signal may be sent to the speaker 210. The speaker 210 may emit the spoken sentence, making it accessible to the concerned person 122 nearby. The volume level can be adjusted based on user preferences.
[00045] Further, the microphone 208 may be employed to acquire the slurred speech of the user. The Bluetooth module 214 may aid in communication with an external device associated with the concerned person 122. The audio data may be fed into a cloud service by Wi-Fi communication. Table 1 provides the specification of the components employed in the electronic circuit of the communication aid 100.
Table 1: Specification of the components employed in the electronic circuit of the communication aid
Component | Specification
Microcontroller | With analog pins, digital pins, portable power source, and memory
Keypad | 4x4 matrix keypad (16 buttons)
LCD Display | 16x2 character LCD display
Speaker | Small speaker or buzzer for audio output
Power Supply | 5V DC (powered via battery)
Communication Interface | Digital I/O pins (for keypad and LCD)
Audio Output | PWM-capable digital pin (for speaker)
[00046] FIG. 2B illustrates an exemplary view of the keypad employed in the proposed communication aid 100 facilitating communication assistance for users with speech impairment, in accordance with an embodiment of the present disclosure.
[00047] In an embodiment, FIG. 2B shows an exemplary view of the keypad 102 employed in the proposed communication aid 100 facilitating communication assistance for users with speech impairment. The keypad 102 may include one or more phrases on the one or more keys 104 that are used by the user 120, curated based on a plurality of communication charts issued by a healthcare facility. The one or more phrases are mentioned on the keypad 102 in one or more ways. The one or more ways include any or a combination of one or more communication languages and an image. The one or more communication languages include any or a combination of an English, a regional language of choice, and a Braille.
[00048] In an exemplary embodiment, each phrase on the keypad 102 can be mentioned in English and a regional language of choice. Further, each phrase on the keypad 102 can be mentioned in Braille for people with visual difficulties. Further, each phrase on the keypad 102 can be mentioned as an image for ease of use for illiterate people.
[00049] FIG. 3 illustrates an exemplary flow diagram 300 of the method for facilitating communication assistance for users with speech impairment by a communication aid 100, in accordance with an embodiment of the present disclosure.
[00050] In an embodiment, referring to FIG. 3, the method 300 facilitates communication assistance for users with speech impairment by the communication aid 100. The method includes step 302 of initialising the communication aid 100 by providing power employing a power supply. Further, the method includes step 304 of receiving, by a control unit in the communication aid 100, one or more key impressions generated by the user 120 with a speech impairment condition employing a keypad 102 in the communication aid 100 to identify a mode to operate the communication aid 100. The mode includes at least one of a mode 1 and a mode 2. Further, the method includes the step 306 of detecting, by the control unit, the one or more key impressions to retrieve a specific sentence corresponding to the one or more key impressions to display a communication sentence the user 120 desired to convey to the concerned person 122 on the display unit 110 in the communication aid 100 if the mode identified is the mode 1. Furthermore, the method includes the step 308 of capturing, by the control unit, a voice of the user 120 employing the audio recording unit 108 in the communication aid 100, and processing the voice captured to detect an error in a slurred speech of the user 120 and generate a voice signal employing the voice generation unit 112 by correcting the error in the slurred speech if the mode identified is the mode 2. The voice signal is generated corresponding to a standard speech formed from the slurred speech by correcting the error.
[00051] FIG. 4 illustrates an exemplary process flow diagram 400 of the proposed communication aid 100, in accordance with an embodiment of the present disclosure.
[00052] In an embodiment, referring to FIG. 4, the exemplary process flow diagram 400 describes the operation of the proposed communication aid 100. At step 402, the communication aid 100 detects the mode to operate based on the key impression generated by the user employing the keys in the keypad 102 and the speech impairment of the user. If the speech impairment of the user is of the Aphasia type, the mode is selected as mode 1 in step 402-1. Once the mode selected is mode 1, at step 406, the control unit 106 may detect the key presses generated by the user employing the keypad 102. At step 408, the control unit 106 may maintain a mapping between each key press on the keypad 102 and a specific sentence along with the audio file that has to be played. At step 410, the control unit 106 retrieves the corresponding sentence and audio file from the memory and sends the sentence to the display unit 110 for visual display. Further, the sentence may be displayed on the display unit 110 at step 412-1. Further, at step 414, the control unit 106 converts the sentence into audible speech using a text-to-speech (TTS) technique and triggers the speaker module that amplifies the waveform, producing sound. At step 416-1, the amplified audio signal may be sent to the speaker, and the speaker emits the spoken sentence, making it accessible to the concerned person 122. Further, at step 418, the communication aid 100 receives feedback from the user and confirms that the idea of the user is correctly conveyed to the concerned person 122.
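For illustration of the text-to-speech step in this flow, a brief sketch using the pyttsx3 package is given below; pyttsx3 is only one possible offline TTS backend (the specification does not name an engine), and selecting a voice by name is a heuristic assumed for the example.

```python
# Hedged sketch of the mode 1 text-to-speech step; pyttsx3 is one possible offline
# TTS backend, not the engine mandated by the disclosure.
import pyttsx3

def speak_sentence(sentence, prefer_female=False):
    """Convert the retrieved sentence to audible speech and play it through the speaker."""
    engine = pyttsx3.init()
    if prefer_female:
        # Voice availability is platform dependent; matching on the voice name is a
        # simple heuristic for picking a female-sounding voice, not a guarantee.
        for voice in engine.getProperty("voices"):
            if "female" in voice.name.lower():
                engine.setProperty("voice", voice.id)
                break
    engine.say(sentence)
    engine.runAndWait()

speak_sentence("I need water", prefer_female=True)
```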
[00053] Further, if the speech impairment of the user is of the Dysarthria type, the mode is selected as mode 2 in step 402-2. Once the mode selected is mode 2, at step 418, the control unit 106 may acquire the audio signal containing the slurred speech of the user employing the audio recording unit 108. At step 420, preprocessing techniques may be employed to eliminate potential interference factors from the audio signal. Noise reduction techniques and normalisation procedures may be applied to filter out any unwanted background sounds, to standardise the amplitude of the signal, and to ensure a consistent signal level throughout the analysis. Further, at step 422, key acoustic features like pitch, formants, and intensity are extracted from the speech signal. Further, individual phonemes may be identified from the slurred speech employing a rule-based phoneme detection technique at step 424. The rule-based analysis identifies any phonetic errors or deviations, such as elongated or unclear sounds in the slurred speech. Further, by mapping the detected phonemes to correct speech patterns, the system is able to pinpoint the specific areas of the slurred speech that require correction. Once phonetic errors have been identified, at step 426, formant shifting techniques may be applied to improve the articulation of the slurred speech and bring distorted speech closer to the normal range. Further, the timing of phonemes within the speech signal may be adjusted through time normalisation at step 428. The time normalisation technique may ensure that the phonemes are aligned with standard speech patterns, restoring the natural rhythm and flow of speech, and thereby improving the overall coherence of the speech. At step 430, the speech with the corrected phonetics may further be converted into speech using text-to-speech (TTS) tools to provide a natural-sounding voice signal. The corrected speech may be delivered through the speaker at step 414-2, and the speech may be displayed through the display unit 110 at step 416-2.
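To make the two corrective steps concrete, a minimal sketch of time normalisation of a phoneme segment and a rough spectral-warp formant shift is given below; it illustrates the general idea under assumed parameters (segment lengths, shift factor), not the patented correction procedure.

```python
# Minimal sketch (not the patented algorithm): time normalisation of a phoneme segment
# and a rough formant shift by warping the magnitude spectrum along the frequency axis.
import numpy as np

def time_normalise(segment, target_len):
    """Stretch or compress a phoneme segment to a standard duration by linear interpolation."""
    src = np.linspace(0.0, 1.0, num=len(segment))
    dst = np.linspace(0.0, 1.0, num=target_len)
    return np.interp(dst, src, segment)

def shift_formants(frame, factor):
    """Rough formant shift: warp the magnitude spectrum, keep the original phase.
    factor > 1 moves spectral peaks (formants) up; factor < 1 moves them down."""
    spec = np.fft.rfft(frame)
    mag, phase = np.abs(spec), np.angle(spec)
    bins = np.arange(len(spec))
    warped = np.interp(bins / factor, bins, mag, left=0.0, right=0.0)
    return np.fft.irfft(warped * np.exp(1j * phase), n=len(frame))

if __name__ == "__main__":
    elongated = np.random.randn(2400)            # e.g. a 150 ms slurred vowel at 16 kHz
    fixed = time_normalise(elongated, 1600)      # normalised to a 100 ms target duration
    corrected = shift_formants(fixed, 1.1)       # nudge formants up by roughly 10%
    print(fixed.shape, corrected.shape)
```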
[00054] If the specification states a component or feature "may", "can", "could", or "might" be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic.
[00055] As used in the description herein and throughout the claims that follow, the meaning of "a," "an," and "the" includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise.
[00056] It is to be appreciated by a person skilled in the art that while various embodiments of the present disclosure have been elaborated for a communication aid facilitating communication assistance for users with speech impairment and method thereof, the teachings of the present disclosure are also applicable to other types of applications, and all such embodiments are well within the scope of the present disclosure. Moreover, the communication aid facilitating communication assistance for users with speech impairment, and the method thereof, is equally implementable in other industries as well, and all such embodiments are well within the scope of the present disclosure without any limitation.
[00057] Accordingly, the present disclosure provides a communication aid facilitating communication assistance for users with speech impairment and the method thereof.
[00058] Moreover, in interpreting the specification, all terms should be interpreted in the broadest possible manner consistent with the context. In particular, the terms "comprises" and "comprising" should be interpreted as referring to elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps may be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced. Where the specification claims refer to at least one of something selected from the group consisting of A, B, C….and N, the text should be interpreted as requiring only one element from the group, not A plus N, or B plus N, etc.
[00059] While the foregoing describes various embodiments of the disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof. The scope of the disclosure is determined by the claims that follow. The disclosure is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the disclosure when combined with information and knowledge available to the person having ordinary skill in the art.

ADVANTAGES OF THE PRESENT DISCLOSURE
[00060] The present disclosure provides a communication aid facilitating communication assistance for users with speech impairment and the method thereof.
[00061] The present disclosure provides a communication aid and method which provides communication assistance for people suffering from communication disorders by employing a unique and customisable embedded board through multilingual support in English, regional languages, Braille, and images.
[00062] The present disclosure provides a communication aid and method that follows a two-track approach for assisting patients with speech imperfections.
[00063] The present disclosure provides a communication aid and method that operates in two distinct modes, tailored to the needs of the patients with speech imperfections.
[00064] The present disclosure provides a communication aid and method which uses a speech signal processing method for slurred speech detection by analysing pitch, formants, and intensity through rule-based phoneme detection.
[00065] The present disclosure provides a communication aid and method which uses a rule-based phonetic correction system to correct slurred speech by mapping phonetic errors to correct phonemes using time normalisation technique.
[00066] The present disclosure provides a communication aid and method which provides communication assistance in all regional languages, with both male and female voice output options, ensuring accessibility and ease of use for a broad range of users.
Claims:

1. A communication aid (102) facilitating communication assistance for users with speech impairment, the communication aid (102) comprising:
at least one control unit (106) coupled to one or more functional components, wherein the one or more functional components comprises any or a combination of a keypad (102), an audio recording unit (108), a display unit (110), and a voice generation unit (112), wherein the at least one control unit (106) is configured to:
receive one or more key impressions generated by at least one user (120) with a speech impairment condition employing one or more keys (104) on the keypad (102) to identify a mode to operate the communication aid, wherein the mode comprises at least one of a mode 1 and a mode 2;
detect the one or more key impressions to retrieve a specific sentence corresponding to the one or more key impressions to display a communication sentence the at least one user (120) desired to convey to a concerned person (122) on the display unit (110) if the mode identified is the mode 1; and
capture a voice of the at least one user (120) employing the audio recording unit (108), and process the voice captured to detect an error in a slurred speech of the at least one user (120) and generate a voice signal employing the voice generation unit (112) by correcting the error in the slurred speech if the mode identified is the mode 2, wherein the voice signal is generated corresponding to a standard speech formed from the slurred speech through correcting the error.
2. The communication aid (102) as claimed in claim 1, wherein the communication aid (102) is configured to:
allow the at least one user (120) to communicate with the concerned person (122) by generating the one or more key impressions and displaying the communication sentence on the display unit (110) together with the emission of the voice signal corresponding to the communication by the voice generation unit (112) in a preferred language of the at least one user (120) in mode 1; and
provide the standard speech by converting the slurred speech of the at least one user (120) in mode 2 by extracting one or more features from the slurred speech and processing the slurred speech employing a combination of a signal processing technique and a pre-defined speech guidelines based on the one or more features extracted.
3. The communication aid (102) as claimed in claim 1, wherein the mode is selected by the at least one user (120) based on the speech impairment condition of the at least one user (120),
wherein the speech impairment condition of the at least one user (120) comprises at least one of an impairment affecting the ability to comprehend sentences, and an impairment generating the slurred speech,
wherein the mode 1 is employed by the at least one user (120) with an impairment affecting the ability to comprehend sentences and the mode 2 is employed by the at least one user (120) with the impairment generating the slurred speech,
wherein the impairment affecting the ability to comprehend sentences comprises at least one of an Aphasia, and a Dysphasia, and the impairment generating the slurred speech comprises at least one of a Dysarthria, a tongue-tie, a nasal speech, and a cleft palate speech.
4. The communication aid (102) as claimed in claim 1, wherein the at least one control unit (106) is configured to:
scan continuously a keypad matrix to identify the one or more key impressions generated by the at least one user (120); and
perform a mapping between each of the one or more key impressions with the specific sentence together with an audio file pre-stored in the memory to generate the communication sentence for display,
wherein the communication sentence is displayed on the display unit (110) to enable the concerned person (122) to understand an idea the at least one user (120) desired to convey,
wherein the communication sentence is converted to an audible speech and the voice signal corresponding to the audible speech is emitted out through the voice generation unit (112) to enable the concerned person (122) to understand the idea the at least one user (120) desired to convey.

5. The communication aid (102) as claimed in claim 1, wherein the at least one control unit (106) is configured to:
identify one or more phonemes in the slurred speech based on the one or more features extracted by employing a rule-based phoneme detection technique, wherein the one or more features comprise at least one of a pitch, a formant, and an intensity;
detect the correction required in the slurred speech by performing the mapping between the one or more phonemes identified with a correct speech pattern employing a phonetic analysis;
refine an articulation of the slurred speech by employing a formant shifting technique after completing the phonetic analysis on the slurred speech; and
align the one or more phonemes in the slurred speech with the standard speech pattern by adjusting a timing of the one or more phonemes within the slurred speech employing a time normalisation technique to generate a standard speech.
6. The communication aid (102) as claimed in claim 1, wherein the keypad (102) comprises one or more phrases on the one or more keys (104) that are used by the at least one user (120), curated based on a plurality of communication charts issued by a healthcare facility,
wherein the one or more phrases are mentioned on the keypad (102) in one or more ways,
wherein the one or more ways comprises any or a combination of one or more communication languages and an image,
wherein the one or more communication languages comprises any or a combination of an English, a regional language of choice, and a Braille.
7. The communication aid (102) as claimed in claim 1, wherein the communication aid is configured to:
employ a text-to-speech conversion technique to generate the voice signal from the communication sentence in the mode 1 and the standard speech in the mode 2.
8. The communication aid (102) as claimed in claim 1, wherein the communication aid is a portable device and customizable with one or more easy to understand images based on a user preference.
9. A method (300) for facilitating communication assistance for users with speech impairment by a communication aid, the method (300) comprising:
initialising (302) the communication aid by providing power employing a power supply;
receiving (304), by at least one control unit (106) in the communication aid, one or more key impressions generated by at least one user (120) with a speech impairment condition employing a keypad (102) in the communication aid to identify a mode to operate the communication aid, wherein the mode comprises at least one of a mode 1 and a mode 2;
detecting (306), by the at least one control unit (106), the one or more key impressions to retrieve a specific sentence corresponding to the one or more key impressions to display a communication sentence that the at least one user (120) desired to convey to a concerned person (122) on a display unit (110) in the communication aid if the mode identified is the mode 1; and
capturing (308), by the at least one control unit (106), a voice of the at least one user (120) employing an audio recording unit (108) in the communication aid, and processing the voice captured to detect an error in a slurred speech of the at least one user (120) and generate a voice signal employing the voice generation unit (112) by correcting the error in the slurred speech if the mode identified is the mode 2, wherein the voice signal is generated corresponding to a standard speech formed from the slurred speech through correcting the error.

Documents

Name | Date
202441086516-FORM-8 [12-11-2024(online)].pdf | 12/11/2024
202441086516-COMPLETE SPECIFICATION [09-11-2024(online)].pdf | 09/11/2024
202441086516-DECLARATION OF INVENTORSHIP (FORM 5) [09-11-2024(online)].pdf | 09/11/2024
202441086516-DRAWINGS [09-11-2024(online)].pdf | 09/11/2024
202441086516-EDUCATIONAL INSTITUTION(S) [09-11-2024(online)].pdf | 09/11/2024
202441086516-EVIDENCE FOR REGISTRATION UNDER SSI [09-11-2024(online)].pdf | 09/11/2024
202441086516-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [09-11-2024(online)].pdf | 09/11/2024
202441086516-FORM 1 [09-11-2024(online)].pdf | 09/11/2024
202441086516-FORM 18 [09-11-2024(online)].pdf | 09/11/2024
202441086516-FORM FOR SMALL ENTITY(FORM-28) [09-11-2024(online)].pdf | 09/11/2024
202441086516-FORM-9 [09-11-2024(online)].pdf | 09/11/2024
202441086516-POWER OF AUTHORITY [09-11-2024(online)].pdf | 09/11/2024
202441086516-REQUEST FOR EARLY PUBLICATION(FORM-9) [09-11-2024(online)].pdf | 09/11/2024
202441086516-REQUEST FOR EXAMINATION (FORM-18) [09-11-2024(online)].pdf | 09/11/2024
