Riders of mass transit are exposed to noise at levels that may exceed recommended limits and, given sufficient exposure time, may experience noise-induced hearing loss, a new study reports. Researchers evaluated noise levels across a representative sample of New York City mass transit systems (subways, buses, ferries, tramways, and commuter railways) during June and July 2007. Subway cars and platforms had the highest equivalent continuous average and maximum noise levels, but all systems showed some potential for excessive noise exposure. The study's authors suggest: "Engineering noise-control efforts, including increased transit infrastructure maintenance and the use of quieter equipment, should be given priority over use of hearing protection, which requires rider motivation and knowledge of how and when to wear it."
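The risk framing above combines a noise level with an exposure duration. A minimal sketch of how such a daily dose limit is typically computed, assuming the NIOSH criterion of 85 dB over 8 hours with a 3-dB exchange rate (the article does not state which criterion the study used):

```python
def permissible_hours(level_db, criterion_db=85.0, exchange_rate_db=3.0):
    """Permissible daily exposure duration in hours for a continuous noise
    level, relative to an 8-hour reference at the criterion level.
    Every additional 3 dB halves the allowed time (NIOSH-style 3-dB rule)."""
    return 8.0 / (2.0 ** ((level_db - criterion_db) / exchange_rate_db))

# At the 85 dB criterion level, a full 8-hour day is permitted:
print(permissible_hours(85))   # 8.0
# At 94 dB (a plausible subway-platform level), the limit drops to 1 hour:
print(permissible_hours(94))   # 1.0
```

The halving-per-3-dB rule is what makes even short daily commutes through loud platforms relevant to cumulative dose.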
We humans prefer to be addressed in our right ear and are more likely to perform a task when we receive the request in our right ear rather than our left. In a series of three studies looking at ear preference in human communication, Dr. Luca Tommasi and Daniele Marzoli from the University "Gabriele d'Annunzio" in Chieti, Italy, show that a natural side bias, arising from hemispheric asymmetry in the brain, manifests itself in everyday human behavior. Their findings were recently published online in Springer's journal Naturwissenschaften. One of the best-known asymmetries in humans is right-ear dominance for listening to verbal stimuli, which is believed to reflect the left hemisphere's superiority for processing verbal information.
New technology for hearing vibrations through the skull bone has been developed at Chalmers University of Technology. Besides investigating the function of a new implantable bone conduction hearing aid, Sabine Reinfeldt has studied sensitivity to bone-conducted sound and examined the possibilities for a two-way communication system that uses bone conduction in noisy environments. Reinfeldt investigated a new Bone Conduction Implant (BCI) hearing system: "This hearing aid does not require a permanent skin penetration, in contrast to the Bone-Anchored Hearing Aids (BAHAs) used today." Measurements showed that the new BCI hearing system can be a realistic alternative to the BAHA.
HearAtLast To Launch Exclusive Groundbreaking Neuro-Compensator™ Technology Hearing Aids From VitaSound
HearAtLast Holdings, Inc. (PINKSHEETS: HRAL), a provider of affordable solutions for clients with hearing needs in the billion-dollar hearing loss market, announced the unveiling of breakthrough hearing products based on the Neuro-Compensator™ algorithm technology from VitaSound Audio, in keeping with its tradition of bringing innovative new products to consumers. The Neuro-Compensator™ hearing instruments are powered by a new neuro-biological technology designed to optimize auditory nerve output. Based on many years of research at McMaster University into the electrical signals transmitted to the brain by the auditory nerves in healthy and impaired ears, this patented technology is designed to significantly improve perceived audio quality in hearing devices.
Doctors may gain a new weapon for treating meningitis and fighting drug-resistant bacterial and fungal infections: novel peptide nanoparticles developed by scientists at the Institute of Bioengineering and Nanotechnology (IBN) of Singapore and reported in Nature Nanotechnology. The stable bioengineered nanoparticles devised at IBN are highly therapeutic, effectively seeking out and destroying bacteria and fungal cells that could cause fatal infections. Major brain infections such as meningitis and encephalitis are a leading cause of death, hearing loss, learning disability, and brain damage in patients. Crucially, IBN's peptide nanoparticles contain a membrane-penetrating component that enables them to pass through the blood-brain barrier to the infected areas of the brain that require treatment.
Parents and children giving or receiving an electronic device with music this holiday season should give their ears a gift as well by presetting the maximum output to somewhere between one-half and two-thirds of full volume. Any sound over 85 decibels (dB) exceeds what hearing experts consider a safe level, and some MP3 players are programmed to reach levels as high as 120 dB at maximum volume. Vanderbilt Bill Wilkerson Center Director Ron Eavey, M.D., who also chairs the Department of Otolaryngology, says the new generation is especially susceptible to hearing loss when listening to music through headphones or earbuds either too long or too loud.
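The gap between 85 dB and 120 dB is far larger than the numbers suggest, because decibels are logarithmic. A small illustration of the underlying arithmetic (the specific comparison levels here are taken from the figures above, not from any additional data):

```python
def intensity_ratio(db_a, db_b):
    """How many times more acoustic intensity a sound at level db_a
    carries than one at db_b. Decibels are base-10 logarithmic:
    every +10 dB multiplies intensity by 10."""
    return 10.0 ** ((db_a - db_b) / 10.0)

# A 120 dB maximum-volume MP3 player vs. the 85 dB safety threshold:
print(round(intensity_ratio(120, 85)))  # 3162, i.e. ~3000x the intensity
```

This is why capping a player's output at roughly two-thirds of its maximum can make such a large difference in delivered sound energy.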
A front portion of the brain that handles tasks like decision-making also helps decipher different phonetic sounds, according to new Brown University research. This section of the brain - the left inferior frontal sulcus - treats different pronunciations of the same speech sound (such as a 'd' sound) the same way. In determining this, scientists have solved a mystery. "No two pronunciations of the same speech sound are exactly alike. Listeners have to figure out whether these two different pronunciations are the same speech sound, such as a 'd', or two different sounds, such as a 'd' sound and a 't' sound," said Emily Myers, assistant professor (research) of cognitive and linguistic sciences at Brown University.
A team of researchers from the University of Alcalá de Henares (UAH) has shown scientifically that human beings can develop echolocation, the system of acoustic signals used by dolphins and bats to explore their surroundings. Producing certain kinds of tongue clicks helps people to identify objects around them without needing to see them, something which would be especially useful for the blind. "In certain circumstances, we humans could rival bats in our echolocation or biosonar capacity", Juan Antonio Martínez, lead author of the study and a researcher at the Superior Polytechnic School of the UAH, tells SINC. The team led by this scientist has started a series of tests, the first of their kind in the world, to make use of human beings' under-exploited echolocation skills.
A new study from Canada shows that our skin helps us hear speech by sensing the puffs of air that a speaker produces with certain sounds. The study is the first to show that in conversation we don't just hear the other person's sounds with our ears and use our eyes to interpret facial expressions and other cues (both already well researched); we also use our skin to "perceive" their speech. The study is the work of Professor Bryan Gick of the Department of Linguistics, University of British Columbia, in Vancouver, Canada, and PhD student Donald Derrick. A paper on their work was published in Nature on 26 November. Gick and Derrick found that directing puffs of air at the skin can bias the hearer's perception of spoken syllables.
It is relatively common for listeners to "hear" sounds that are not really there. In fact, it is the brain's ability to reconstruct fragmented sounds that allows us to successfully carry on a conversation in a noisy room. Now, a new study helps to explain what happens in the brain that allows us to perceive a physically interrupted sound as being continuous. The research, published by Cell Press in the November 25 issue of Neuron, provides fascinating insight into the constructive nature of human hearing. "In our day-to-day lives, sounds we wish to pay attention to may be distorted or masked by background noise, which means that some of the information gets lost."