Thursday, October 18, 2012

Wave Studio Tips

*sound science series # 7


Recording a new audio file !!
(RAW, WMA or WAV data file)
Sample rate (frequency): 
(frequency may be chosen from 8000 Hz to 96000 Hz)
# 11025 Hz suitable for voice recording,
# 22050 Hz suitable for tape quality recording, 
# 44100 Hz suitable for CD quality recording.

Sampling size:
8 bit     cassette tape quality (lower sound quality),
16 bit   CD quality,
24 bit   higher sound quality.


Channel 
Mono/Stereo options
* A Wave file with better sound quality requires more storage space because of its higher sampling rate and sample size.
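
To see why, the storage requirement can be estimated directly from the attributes: bytes = sample rate x (bit depth / 8) x channels x seconds. A minimal Python sketch of that arithmetic (illustrative only, not part of Wave Studio):

def pcm_size_bytes(sample_rate_hz, bit_depth, channels, seconds):
    """Approximate size of raw PCM audio data, ignoring file-format headers."""
    return sample_rate_hz * (bit_depth // 8) * channels * seconds

# One minute of CD-quality stereo (44100 Hz, 16-bit, 2 channels): ~10.6 MB
print(pcm_size_bytes(44100, 16, 2, 60))   # 10584000 bytes
# One minute of 11025 Hz, 8-bit mono voice: ~0.66 MB
print(pcm_size_bytes(11025, 8, 1, 60))    # 661500 bytes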


Mixing:
When mixing 8 bit data with 16 bit wave data, convert attributes via status bar.
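
As a rough illustration of what converting the attributes means numerically (this is a sketch with NumPy, not Wave Studio's own code), unsigned 8-bit samples can be widened to signed 16-bit before the two files are mixed:

import numpy as np

def u8_to_s16(samples_u8):
    """Widen unsigned 8-bit PCM (0..255) to signed 16-bit PCM."""
    centered = samples_u8.astype(np.int16) - 128   # shift the midpoint to zero
    return centered * 256                          # scale into the 16-bit range

print(u8_to_s16(np.array([0, 128, 255], dtype=np.uint8)))   # [-32768      0  32512]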


Recording a new audio file:
Menu > File > New
Select: sampling rate, bit depth, recording device, playback device.
Menu > Audio > Record
File > Save
Enter the new file name and then click the Save button.


Using DirectX audio plug-ins:
DirectX audio plug-ins are software components that let you apply special effects to your audio files. You must install the plug-ins on your computer before you can use this feature.

Audio clean-up:
Menu > Audio Clean-up
Hiss removal 30 %
Click removal
Original Hiss level


Reverse:
Menu > Task > Reverse > Channels (select) > OK
*tips: you can create unusual sound effects if you mix a wave with its reversed form.
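
A minimal NumPy sketch of that tip (an assumed illustration, not the program's internals): reverse the sample array and average it with the original.

import numpy as np

def mix_with_reverse(samples):
    """Mix a waveform with its time-reversed copy, averaged to avoid clipping."""
    return (samples + samples[::-1]) / 2.0

tone = np.sin(2 * np.pi * 440 * np.linspace(0, 1, 44100))  # one second of 440 Hz
weird = mix_with_reverse(tone)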


Normalize:
Menu > Task > Normalize
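
Normalizing simply rescales the whole file so that its loudest sample reaches a chosen peak. A small sketch of the idea, assuming the samples are held in a NumPy array (not Wave Studio's actual algorithm):

import numpy as np

def normalize(samples, target_peak=1.0):
    """Scale the entire file so the loudest sample hits the target peak."""
    peak = np.max(np.abs(samples))
    if peak == 0:
        return samples            # silence: nothing to scale
    return samples * (target_peak / peak)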

Echo:
Task > Echo > Add Echo (Echo Magnitude and Echo Delay) > OK
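
The two parameters map naturally onto a simple delay line: Echo Delay sets how far back the copy sits, Echo Magnitude sets how loud it is. A hedged NumPy sketch of that idea (not the program's actual implementation):

import numpy as np

def add_echo(samples, sample_rate, delay_s=0.3, magnitude=0.5):
    """Add one delayed, attenuated copy of the signal to itself."""
    d = int(delay_s * sample_rate)
    out = samples.astype(float).copy()
    if 0 < d < len(samples):
        out[d:] += magnitude * samples[:-d]
    return out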

Fade-in and fade-out:
fading in (soft to loud)

fading out (loud to soft)
*The effect applies to your entire audio file.
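
Both fades are just an amplitude ramp multiplied onto the samples. A minimal sketch (linear ramps assumed, samples in a NumPy array, fade times greater than zero):

import numpy as np

def fade(samples, sample_rate, fade_in_s=1.0, fade_out_s=1.0):
    """Linear fade-in at the start (soft to loud) and fade-out at the end (loud to soft)."""
    out = samples.astype(float).copy()
    n_in = int(fade_in_s * sample_rate)
    n_out = int(fade_out_s * sample_rate)
    out[:n_in] *= np.linspace(0.0, 1.0, n_in)
    out[-n_out:] *= np.linspace(1.0, 0.0, n_out)
    return out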


Invert Waveform:
Task > Invert Waveform > (select channels) > OK
This specific effect inverts the waveform along its horizontal axis.
*tips: you can create an unusual effect if you invert only one channel of a Stereo file.


Pan Left-to-Right or Pan Right-to-Left:
This effect applies to Stereo files only.
*tips: This feature is useful if you want to simulate movement of a sound source from one end of the sound stage to the other.

Phase shift:
*tips: You can convert a Mono audio file to a Stereo audio file and apply the phase-shift effect. This gives the converted Mono file a 'pseudo-stereo' effect.



Volume:
Task > Volume
A magnitude greater than 100 % increases the volume,
a magnitude less than 100 % decreases the volume.
(For Stereo files, the change affects both channels.)
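
In other words, the Volume task is a plain multiplication of every sample by magnitude / 100. A one-line sketch, assuming the samples sit in a NumPy array:

def change_volume(samples, magnitude_percent):
    """Greater than 100 % boosts the signal, less than 100 % attenuates it."""
    return samples * (magnitude_percent / 100.0)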

Adding markers:
On the Time Ruler, right-click the desired location and add a Marker,
or use Menu > Edit > Markers > Drop Markers.

Removal of markers is handled from the same Markers menu.


Monitoring Volume:
Peak indicator - how loud the sound is,
Valley indicator - how soft it is.
Five scales represent different volume ranges
(-90 dB to Infinity, -24 dB to Infinity, etc.)
Options > Volume Meter:
-90 dB to Infinity,
-78 dB to Infinity,
-60 dB to Infinity,
-42 dB to Infinity,
-24 dB to Infinity.

Specifying Indicators:
Right-click on the volume meter to list all the indicators.

Changing Audio Set Up:
Menu > Options > Preferences > Audio: select the required audio devices for both the Recording device and the Playback device,
click OK.

Changing Editing options (undo operations):
Options > Preferences > Editing tab.



Monday, October 15, 2012

Acoustic Resonance

*sound science series # 6
Natural Frequency: The frequency at which an object vibrates when allowed to do so freely.
All objects vibrate when they are disturbed. When each object vibrates, it tends to vibrate at a particular frequency or set of frequencies. We call these frequencies an object's natural frequencies.
Example: a tuning fork will vibrate at only one frequency while a clarinet will vibrate at a set of frequencies.

Acoustic Resonance: Resonance involving sound waves
When objects or air particles vibrate, if the amplitude of the vibrations is large enough and if the frequency of the vibration is within the human hearing range, then the object will produce sound waves which are audible.
Example of acoustic resonance in air columns closed at one end;
*Our own voice (the vocal tract acts as an air column closed at one end, with the open end near the lips).
*Music produced by blowing into plastic bottles filled with water.
*Music produced by a clarinet.
Resonance: The transfer of energy of vibration from one object to another having the same natural frequency.
Vibration: Repeated pattern of motion, also called a cycle.

Some musical instruments based on resonance phenomenon:
 Instruments with vibrating strings (guitar strings, violin strings, and piano strings), open-end air column instruments (brass instruments such as the trombone and woodwinds such as the flute and the recorder), and closed-end air column instruments (some organ pipes and the bottles of a pop-bottle orchestra). A fourth category is vibrating mechanical systems, which includes all the percussion instruments. These categories may be unusual to some; they are based upon the commonalities among their standing wave patterns and the mathematical relationships between the frequencies that the instruments produce.
 If the frequency of the external force is equal to the natural frequency of the body (or to an integral multiple of it), then the amplitude of the forced vibrations (or oscillations) of the body becomes quite large. This phenomenon is called resonance.
Thus, resonance is a particular case of forced vibrations (or oscillations).
Examples:
a. Mechanical resonance
   i. army passing over a bridge
b. Sound resonance
   i. resonance box
   ii. vibration of strings
   iii. determination of frequency
   iv. pouring of water into a vessel
   v. vibration of surroundings
   vi. resonator
c. Electromagnetic resonance
   i. radio

The Dark Side of Resonance,  
The Tacoma-Narrows Bridge 
Every powerful phenomenon in nature has its dark side and resonance is no exception. It's best experienced in moderation. Taken to an extreme, resonance causes things to break catastrophically. For example, when an opera singer with a very loud voice hits the right frequency she can cause a champagne glass to resonate and break. 
On the morning of November 7, 1940, the four-month-old Tacoma Narrows Bridge began to oscillate dangerously up and down. A reporter drove out on the bridge with his cocker spaniel in the car. The bridge was heaving so violently that he had to abandon his car and crawl back to safety on his hands and knees.
At about 11:00 the bridge tore itself apart and collapsed. It had been designed for winds of 120 mph and yet a wind of only 42 mph caused it to collapse. How could this happen? No one knows exactly why. However, the experts do agree that somehow the wind caused the bridge to resonate. It was a shocking calamity although the only loss of life was the cocker spaniel.
Example of forced vibration: say the natural frequency of a glass cup is 497.955 Hz. Producing the same frequency from a guitar sets up resonance in the glass, and the glass can break. In this example, 497.955 Hz is the 'critical frequency' of the glass.


                                       Nikola Tesla (1856 - 1943) - Master of Resonance

Nikola Tesla - Master of Resonance: Tesla was a genius who was obsessed with resonance. No discussion of resonance could be complete without talking about Tesla.
 It was an innocent experiment. Tesla had attached a small vibrator to an iron column in his New York City laboratory and started it vibrating. At certain frequencies specific pieces of equipment in the room would jiggle. Change the frequency and the jiggle would move to another part of the room. Unfortunately, he hadn't accounted for the fact that the column ran downward into the foundation beneath the building. His vibrations were being transmitted all over Manhattan.
 
Sharp and flat resonance: sharpness of resonance is, in a way, a measure of the rate of fall of amplitude from its maximum value at the resonant frequency, on either side of it. The sharper the fall in amplitude, the sharper the resonance.
 Law of conservation of energy: in the case of a falling object, the initial potential energy is transformed into other forms of energy, i.e. into heat or sound
(in other words, when sound is absorbed, it turns into heat).
 *Note: all pictures thankfully shared from various sources.

Sunday, October 14, 2012

Physics Of Sound

*sound science series # 5

Sound is made up of changes in air pressure in the form of waves. Frequency is the property of sound that most determines pitch. The frequencies an ear can hear are limited to a specific range of frequencies.
Mechanical vibrations perceived as sound travel through all forms of matter: gases, liquids, solids, and plasmas. The matter that supports the sound is called the medium. Sound cannot travel through a vacuum.
The audible frequency range for humans is typically given as being between about 20 Hz and 20,000 Hz (20 kHz). High frequencies often become more difficult to hear with age. Other species have different hearing ranges. For example, some dog breeds can perceive vibrations up to 60,000 Hz.
There are four main parts to a sound wave: wavelength, period, amplitude, and frequency.
Frequency: how many complete waves pass a set point per second (measured in Hertz, Hz).
Amplitude: the height of the wave from the mid line to the peak.
Wavelength: the distance from one peak to the next.


Frequency
Every cycle of sound has one condensation, a region of increased pressure, and one rarefaction, a region where air pressure is slightly less than normal. The frequency of a sound wave is measured in hertz. Hertz (Hz) indicate the number of cycles per second that pass a given location. If a speaker's diaphragm is vibrating back and forth at a frequency of 900 Hz, then 900 condensations are generated every second, each followed by a rarefaction, forming a sound wave whose frequency is 900 Hz.


 Wavelength and Period (T; measured in seconds)
The wavelength is the horizontal distance between any two successive equivalent points on the wave; that is, the horizontal length of one cycle of the wave. The period of a wave is the time required for one complete cycle of the wave to pass by a point. So, the period is the amount of time it takes for a wave to travel a distance of one wavelength.
The time taken by two consecutive compressions or rarefactions to cross a fixed point is called the time period of the wave,
or: the time taken for one complete oscillation in the density of the medium is called the time period of the sound wave.

Amplitude (A; units of density or pressure)
 The amplitude of a sound is represented by the height of the wave. When there is a loud sound, the wave is high and the amplitude is large. Conversely, a smaller amplitude represents a softer sound. A decibel is a unit that measures the intensity of sound; the softest sound a human can hear is the zero point (0 dB). Doubling the sound pressure raises the level by about 6 dB. Humans speak normally at about 60 decibels.
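
Those decibel figures follow from the standard relation dB = 20 x log10(pressure ratio). A quick check in Python:

import math

def pressure_ratio_to_db(ratio):
    """Convert a sound-pressure ratio to decibels."""
    return 20 * math.log10(ratio)

print(pressure_ratio_to_db(2))     # ~6.02 dB: doubled pressure adds about 6 dB
print(pressure_ratio_to_db(1000))  # 60 dB: roughly the level of normal speech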


Pitch
How the brain interprets the frequency of an emitted sound is called the pitch. We already know that the number of sound waves passing a point per second is the frequency. The faster the vibrations of the emitted sound (the higher the frequency), the higher the pitch; when the frequency is low, the pitch is lower.


Speed of Sound
The speed of sound depends on the properties of the medium through which it travels. The speed of sound in a medium depends also on temperature and pressure of the medium.
The speed of sound decreases when we go from solid to gaseous state. In any medium as we increase the temperature the speed of sound increases.

Speed of Sound: at 21 °C (70 °F), sound travels at 344 metres per second (1,129 ft per second), about 1,238 km/h or 770 mph. At freezing, the figures are 331 m/s (1,087 ft/s). The speed of sound in water is 1,480 m/s (4,856 ft/s), more than 3,000 miles per hour.
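
The temperature dependence in air is often approximated (for dry air) as v ≈ 331.3 + 0.606 x T, with T in °C. A small sketch using that approximation reproduces the figures above:

def speed_of_sound_air_ms(temp_c):
    """Approximate speed of sound in dry air, in metres per second."""
    return 331.3 + 0.606 * temp_c

print(speed_of_sound_air_ms(0))    # ~331 m/s at freezing
print(speed_of_sound_air_ms(21))   # ~344 m/s at 21 degrees Celsius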
Wavelength, Frequency & Speed of Sound
Wavelength x Frequency = Speed of Sound, or,
Wavelength = Speed of Sound / Frequency, and
Frequency = Speed of Sound / Wavelength
As frequency increases (becomes higher), the wavelength becomes shorter.
As frequency decreases (becomes lower), the wavelength becomes longer.
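
A short sketch of those relations (speed of sound taken as 344 m/s), which roughly reproduces the chart below:

def wavelength_from_frequency(frequency_hz, speed_of_sound=344.0):
    """Wavelength = speed of sound / frequency."""
    return speed_of_sound / frequency_hz

def frequency_from_wavelength(wavelength_m, speed_of_sound=344.0):
    """Frequency = speed of sound / wavelength."""
    return speed_of_sound / wavelength_m

print(wavelength_from_frequency(20))      # 17.2 m   : low frequency, long wavelength
print(wavelength_from_frequency(20000))   # 0.0172 m : high frequency, short wavelength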

A chart of a few selected frequencies and their corresponding wavelengths:

Frequency      Wavelength
20 Hz          56.5 ft    (17.22 m)
50 Hz          22.6 ft    (6.89 m)
100 Hz         11.3 ft    (3.44 m)
400 Hz         2.83 ft    (0.86 m)
1,000 Hz       1.13 ft    (0.34 m)
5,000 Hz       2.71 in    (6.89 cm)
10,000 Hz      1.36 in    (3.44 cm)
20,000 Hz      0.68 in    (1.72 cm)


Quality or Timbre is that characteristic which enables us to distinguish one sound from another having the same pitch and loudness.
The sound which is more pleasant is said to be of a rich Quality.


Tone and Note: a sound of a single frequency is called a Tone.
A sound which is produced by a mixture of several frequencies is called a Note and is pleasant to listen to.


Intensity: the amount of sound energy passing each second through a unit area is called the intensity of sound.


Echo: the sensation of sound persists in our brain for about 0.1 second. To hear a distinct echo, the time interval between the original sound and the reflected one must be at least 0.1 second.
With a speed of sound of 344 m/s at 22 °C in air:
344 m/s × 0.1 s = 34.4 m
The minimum distance of the obstacle from the source of sound must be half of this distance, that is 17.2 m.
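
That arithmetic in a couple of lines of Python:

def min_echo_distance_m(speed_of_sound=344.0, persistence_s=0.1):
    """Minimum obstacle distance for a distinct echo (sound travels out and back)."""
    round_trip = speed_of_sound * persistence_s   # 34.4 m out and back
    return round_trip / 2                         # 17.2 m one way

print(min_echo_distance_m())   # 17.2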

Human echolocation
Human echolocation is the ability of humans to detect objects in their environment by sensing echoes from those objects. This ability is used by some blind people to navigate within their environment. They actively create sounds, such as by tapping their canes, lightly stomping their feet or by making clicking noises with their mouths. It can, however, also be fed into the human nervous system as a new sensory experience. Human echolocation is similar in principle to active sonar and to the echolocation employed by some animals, including bats and dolphins.
By interpreting the sound waves reflected by nearby objects, a person trained to navigate by echolocation can accurately identify the location and sometimes size of nearby objects and not only use this information to steer around obstacles and travel from place to place, but also detect small movements relative to objects.
However, in the case of human clicking, since humans make sounds with much lower frequencies (slower rates), such human echolocation can only picture comparatively much larger objects than other echolocating animals.

Mechanics
Vision and hearing are closely related in that they can both process reflected signals. Vision processes photons as they travel from their source, bounce off surfaces throughout the environment and enter the eyes. Similarly, the auditory system processes sound waves as they travel from their source, bounce off surfaces and enter the ears. Both systems can extract a great deal of information about the environment by interpreting the complex patterns of reflected energy that they receive. In the case of sound, these waves of reflected energy are called "echoes".
Echoes and other sounds can convey spatial information that is comparable in many respects to that conveyed by light. With echoes, a blind traveler can perceive very complex, detailed, and specific information from distances far beyond the reach of the longest cane or arm. Echoes make information available about the nature and arrangement of objects and environmental features such as overhangs, walls, doorways and recesses, poles, ascending curbs and steps, planter boxes, pedestrians, fire hydrants, parked or moving vehicles, trees and other foliage, and much more. Echoes can give detailed information about location (where objects are), dimension (how big they are and their general shape), and density (how solid they are). Location is generally broken down into distance from the observer and direction (left/right, front/back, high/low). Dimension refers to the object's height (tall or short) and breadth (wide or narrow).

Notable individuals who employ echolocation:
James Holman, Daniel Kish, Ben Underwood, Tom De Witte, Dr. Lawrence Scadden, Lucas Murray, Kevin Warwick

*Note: all pictures thankfully shared from various sources..
 






Thursday, October 11, 2012

Cell Receptors 2012

Nobel prize in chemistry 2012 for work on cell receptors

This may be the first Nobel prize in chemistry awarded to two medical doctors.

Americans Robert J Lefkowitz and Brian K Kobilka have won this year's chemistry Nobel for their work on G-protein-coupled receptors, which allow cells to sense light, flavour and odour, and to receive signals from hormones and neurotransmitters.

Pictures of the winners – Robert Lefkowitz (left) and Brian Kobilka – are projected on a screen as Staffan Normark (centre) of the Royal Swedish Academy of Sciences makes the announcement in Stockholm. Photograph: Jonathan Nackstrand/AFP/Getty Images
The award of this year's chemistry Nobel prize to Robert Lefkowitz (left) and Brian Kobilka demonstrates how the distinctions between the disciplines of chemistry and biology have been breaking down. Photograph: AFP/Getty Images

Biological emphasis of this year's Nobel prize in chemistry: the field of chemical biology is burgeoning because at its heart, certainly at the heart of cell biology, is an understanding at the molecular level of what is going on, and that is essentially chemistry. Other sorts of chemistry are still going on and are still very important, but this level of understanding, made possible by advances in techniques over the last 20 years or so, is crucial, because it is so important that we understand the molecular processes going on in the cells of animal and human bodies.

Lefkowitz in his lab at Duke University in Durham, North Carolina, in 1996. Photograph: AP

The News !!
Lefkowitz told a news conference by telephone he was asleep when the phone call came from Sweden.

"I did not hear it - I must share with you that I wear earplugs to sleep. So my wife gave me an elbow. So there it was, a total shock and surprise," he said.

He said he has no idea what he will do with the prize money he shares with Kobilka, who spent the early part of his career in Lefkowitz' lab at Duke.

"It's funny. I can honestly tell you it was about an hour after this all hit, it dawned on me for the first time that it's a lot of money," he told Reuters later from his home in Durham, North Carolina.

"It's over a mill dollars to share with Brian Kobilka. I haven't a clue. As they say, it ain't about the money."
Brian Kobilka speaking on the phone after the announcement as the world's media converge on his home in Palo Alto. Photograph: Linda A Cicero/AFP/Getty Images

Kobilka said when the phone call first came in from Stockholm, he thought it was a crank call or a wrong number. "Then it rang again. You get congratulated by these members of the Swedish committee and things happen pretty fast," he said in a telephone interview from his home in Palo Alto, California.
He said he was being recognized primarily for his work in determining the structure of the receptors and what they look like in three dimensions.
 

"Probably the most high profile piece of work was published last year, where we have a crystal structure of the receptor activating the G protein. It's caught in the act of signaling across the membrane," he said.

Nutshell
#
Two American scientists won the 2012 Nobel Prize for chemistry on Wednesday for research into how cells respond to external stimuli that is helping to develop better drugs to fight diseases such as diabetes, cancer and depression.

# The Royal Swedish Academy of Sciences said the 8 million crown ($1.2 million) prize went to Robert Lefkowitz, 69, and Brian Kobilka, 57, for discovering the inner workings of G-protein-coupled receptors, which allow cells to respond to chemical messages such as adrenaline rushes.

# “Around half of all medications act through these receptors, among them beta blockers, antihistamines and various kinds of psychiatric medications,” the Nobel Prize committee said.

# Working out better ways to target the receptors, known as GPCRs, is an area of keen interest to pharmaceutical and biotechnology companies.


# GPCRs are linked to a wide range of diseases, since they play a central role in many biological functions in the body, but developing new drugs to target them accurately has been difficult because of a lack of fundamental understanding as to how they function. Experts say the work of the Nobel Prize winners has opened the door to making better medicines.

# Drugs targeting GPCRs have potential in treating illnesses involving the central nervous system, heart conditions, inflammation and metabolic disorders.

# "This ground-breaking work spanning genetics and biochemistry has laid the basis for much of our understanding of modern pharmacology as well as how cells in different parts of living organisms can react differently to external stimulation,"

# "Out of the roughly 1,400 drugs that exist in the world, about 1,000 of them are little pills that you consume, and the majority of these are based on these receptors."

Transmembrane receptor: E = extracellular space; I = intracellular space; P = plasma membrane

 

This from the Nobel Assembly press material:

Your body is a fine-tuned system of interactions between billions of cells. Each cell has tiny receptors that enable it to sense its environment, so it can adapt to new situations. Robert Lefkowitz and Brian Kobilka are awarded the 2012 Nobel Prize in Chemistry for groundbreaking discoveries that reveal the inner workings of an important family of such receptors: G-protein-coupled receptors.


The seven-transmembrane α-helix structure of a G-protein-coupled receptor
                      
                      And more to share:
For a long time, it remained a mystery how cells could sense their environment. Scientists knew that hormones such as adrenalin had powerful effects: increasing blood pressure and making the heart beat faster. They suspected that cell surfaces contained some kind of recipient for hormones. But what these receptors actually consisted of and how they worked remained obscured for most of the 20th Century.

Lefkowitz started to use radioactivity in 1968 in order to trace cells' receptors. He attached an iodine isotope to various hormones, and thanks to the radiation, he managed to unveil several receptors, among those a receptor for adrenalin: β-adrenergic receptor. His team of researchers extracted the receptor from its hiding place in the cell wall and gained an initial understanding of how it works.
The team achieved its next big step during the 1980s. The newly recruited Kobilka accepted the challenge to isolate the gene that codes for the β-adrenergic receptor from the gigantic human genome. His creative approach allowed him to attain his goal. When the researchers analyzed the gene, they discovered that the receptor was similar to one in the eye that captures light. They realized that there is a whole family of receptors that look alike and function in the same manner.

Today this family is referred to as G-protein–coupled receptors. About a thousand genes code for such receptors, for example, for light, flavour, odour, adrenalin, histamine, dopamine and serotonin. About half of all medications achieve their effect through G-protein–coupled receptors.

The studies by Lefkowitz and Kobilka are crucial for understanding how G-protein–coupled receptors function. Furthermore, in 2011, Kobilka achieved another break-through; he and his research team captured an image of the β-adrenergic receptor at the exact moment that it is activated by a hormone and sends a signal into the cell. This image is a molecular masterpiece – the result of decades of research.

Wednesday, October 10, 2012

Ultrasonic Sound

*sound science series # 4 
Ultrasonic Sound
The term "ultrasonic" applied to sound refers to anything above the frequencies of audible sound, and nominally includes anything over 20,000 Hz. Frequencies used for medical diagnostic ultrasound scans extend to 10 MHz and beyond. 

Approximate frequency ranges corresponding to ultrasound, with rough guide of some applications


Uses of Ultrasonic sound:  Sounds in the range 20-100kHz are commonly used for communication and navigation by bats, dolphins, and some other species. Much higher frequencies, in the range 1-20 MHz, are used for medical ultrasound. Such sounds are produced by ultrasonic transducers. A wide variety of medical diagnostic applications use both the echo time and the Doppler shift of the reflected sounds to measure the distance to internal organs and structures and the speed of movement of those structures. Typical is the echocardiogram, in which a moving image of the heart's action is produced in video form with false colors to indicate the speed and direction of blood flow and heart valve movements. Ultrasound imaging near the surface of the body is capable of resolutions less than a millimeter. The resolution decreases with the depth of penetration since lower frequencies must be used (the attenuation of the waves in tissue goes up with increasing frequency.) The use of longer wavelengths implies lower resolution since the maximum resolution of any imaging process is proportional to the wavelength of the imaging wave.
Radar and ultrasonic sound waves made by dolphins: even though dolphins live in the darkness of the sea, they can still search for food in the form of fish.
They do this by emitting sound that cannot be detected by the human ear (ultrasonic sound waves) and catching the sound that comes back like an echo. Even though they cannot see their food, they can tell their direction and distance by using ultrasonic sound waves.
The way radar works and the way that dolphins use ultrasonic sound waves are much the same.
Radars used at airports emit radio waves from an antenna and catch the radio waves that are reflected off aircraft.
Through this, the direction and distance of aircraft can be detected.
Principle of an active sonar
Ultrasonic range finding: A common use of ultrasound is in range finding; this use is also called SONAR, (sound navigation and ranging). This works similarly to RADAR (radio detection and ranging): An ultrasonic pulse is generated in a particular direction. If there is an object in the path of this pulse, part or all of the pulse will be reflected back to the transmitter as an echo and can be detected through the receiver path. By measuring the difference in time between the pulse being transmitted and the echo being received, it is possible to determine how far away the object is.
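
The same round-trip arithmetic in a short sketch (speed of sound in water assumed to be about 1,480 m/s):

def sonar_distance_m(echo_delay_s, speed_of_sound=1480.0):
    """One-way distance to the reflecting object from the round-trip echo time."""
    return speed_of_sound * echo_delay_s / 2

print(sonar_distance_m(0.5))   # an echo returning after 0.5 s -> object ~370 m away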
Bats and ultrasound: Bats use ultrasonic sound for navigation. Their ability to catch flying insects while flying full speed in pitch darkness is astounding. Their sophisticated echolocation permits them to distinguish between a moth (supper) and a falling leaf.
There are about 800 species of bats, grouped into 17 families. The ultrasonic signals utilized by these bats fall into three main categories: 1. short clicks, 2. frequency-swept pulses, and 3. constant-frequency pulses. There are two suborders, Megachiroptera and Microchiroptera; the Megachiroptera use short clicks, while the Microchiroptera use the other two. Tongue clicks produce click pairs separated by about 30 ms, with 140-430 ms between pairs (Sales and Pye, Ultrasonic Communication by Animals), and the frequency-swept clicks cover roughly 10-60 kHz. One family of bats, the Vespertilionidae, produce frequency-swept pulses from 78 kHz down to 39 kHz in 2.3 ms, emitted 8 to 15 times a second, a rate that can increase to 150-200 per second when there is a tricky maneuver to be made.
Bats use ultrasounds to navigate in the darkness.



Some more with ultrasound ability: The upper frequency limit in humans (approximately 20 kHz) is due to limitations of the middle ear, which acts as a low-pass filter. Ultrasonic hearing can occur if ultrasound is fed directly into the skull bone and reaches the cochlea through bone conduction without passing through the middle ear.
       It is a fact in psychoacoustics that children can hear some high-pitched sounds that older adults cannot hear, because in humans the upper limit pitch of hearing tends to become lower with age, which may be due to the considerable variation of age-related deterioration in the upper hearing threshold.
                  Many animals—such as dogs, cats, dolphins, bats, and mice—have an upper frequency limit that is higher than that of the human ear and thus can hear ultrasound. This is why a dog whistle can be heard by a dog.

Ultrasonic technology in wide range of applications:
http://www.angelfire.com/nj3/soundweapon/ultrales.htm
  
*Note: all pictures thankfully shared from various sources..

Infrasonic Sound

 *sound science series # 3

Infrasound  

Infrasound is a sound that is lower in frequency than 20 Hz (Hertz) or cycles per second, the "normal" limit of human hearing. 
                      Hearing becomes gradually less sensitive as frequency decreases, so for humans to perceive infrasound, the sound pressure must be sufficiently high. The ear is the primary organ for sensing infrasound, but at higher levels it is possible to feel infrasound vibrations in various parts of the body.
                 The study of such sound waves is sometimes referred to as infrasonics, covering sounds beneath 20 Hz down to 0.001 Hz.

Sources: Infrasound sometimes results naturally from severe weather, surf, lee waves, avalanches, earthquakes, volcanoes, bolides, waterfalls, calving of icebergs, aurorae, lightning and upper-atmospheric lightning. Nonlinear ocean wave interactions in ocean storms produce pervasive infrasound vibrations around 0.2 Hz, known as microbaroms. According to the Infrasonics Program at the NOAA, infrasonic arrays can be used to locate avalanches in the Rocky Mountains, and to detect tornadoes on the high plains several minutes before they touch down.
                   Infrasound can also be generated by human-made processes such as sonic booms and explosions (both chemical and nuclear), by machinery such as diesel engines and older designs of down tower wind turbines and by specially designed mechanical transducers (industrial vibration tables) and large-scale subwoofer loudspeakers such as rotary woofers. The Comprehensive Nuclear-Test-Ban Treaty Organization Preparatory Commission uses infrasound as one of its monitoring technologies (along with seismic, hydroacoustic, and atmospheric radionuclide monitoring).
              
Whales, elephants, hippopotamuses, rhinoceros, giraffes, okapi, and alligators are known to use infrasound to communicate over distances—up to hundreds of miles in the case of whales. In particular, the Sumatran Rhinoceros has been shown to produce sounds with frequencies as low as 3 Hz which have similarities with the song of the humpback whale. The roar of the tiger contains infrasound of 18 Hz and lower, and the purr of felines is reported to cover a range of 20 to 50 Hz. It has also been suggested that migrating birds use naturally generated infrasound, from sources such as turbulent airflow over mountain ranges, as a navigational aid. Elephants, in particular, produce infrasound waves that travel through solid ground and are sensed by other herds using their feet, although they may be separated by hundreds of kilometres.


Elephants use infrasonic sounds to communicate


wind turbines produce major infrasound

Battle of Jericho: what about sound guns and other directed noise-based attacks

Low-frequency acoustic waves were first discovered after the eruption of Krakatoa (Indonesia) on August 27, 1883. Due to its low-frequency content, the infrasound traveled up to four times around the globe while reaching altitudes of over 100 km.

storms and ocean waves generate infrasound

      Animal reactions to infrasonics:
  •  Animals have been known to perceive infrasonic waves travelling through the earth from natural disasters, and can use them as an early warning.
  •  Infrasound may also be used for long-distance communication by African elephants.

    Human reactions to infrasonics:
    1. Sound frequencies below 20 Hz affect our internal body organs. Each organ is susceptible to these subsonic frequencies and starts vibrating when its critical frequency is reached.
    2. The human body behaves like a resonating chamber (like an ear) for infrasonic sound.
    3. These silent, mysterious sounds affect the internal organs in our abdominal and cranial cavities.
A feeling of awe: one study has suggested that infrasound may cause feelings of awe or fear in humans. It has also been suggested that since it is not consciously perceived, it can make people feel vaguely that supernatural events are taking place.

Ghost feelings: Research by Vic Tandy, a lecturer at Coventry University, suggested that an infrasonic signal of 19 Hz might be responsible for some ghost sightings.
           More recent research seems to indicate that, while infrasound does appear to have effects on human emotions, some of Tandy's findings are inaccurate.

 Alarming infrasonics: Waves of infrasound are invisible, but slam into living tissue and physical structures with great force.
The sensation vibrates internal organs and buildings, flattening objects as the sonic wave strikes.
At certain pitches, it can explode matter.

          Pipe organs with very long pipes, such as those found in churches and cathedrals, produce infrasound. In one UK study, the extreme bass frequencies instilled strange feelings in a concert-hall audience; the reported effects were an "extreme sense of sorrow, coldness, anxiety, and even shivers down the spine."

Infrasonics as a weapon: high-intensity, low-frequency sound and infrasound are powerful forces, and governments have tested and used them as weapons of war.


# (for the reference reading)
http://www.lowertheboom.org/trice/infrasound.htm

*Note: all pictures thankfully shared from various sources..

Sound Frequency

*sound science series # 2
Frequency: 
Frequency tells us how frequently an event occurs. Suppose you are beating a drum: the number of times you beat the drum per unit time is the frequency of the beating.
                The change in density from the maximum value to the minimum value and back to the maximum value makes one complete oscillation.
                 The number of such oscillations per unit time is the frequency of the sound wave.

Sound Frequency:
  • Sound is a mechanical wave that is an oscillation of pressure transmitted through a solid, liquid, or gas, composed of frequencies within the range of hearing.
  • The number of cycles per unit of time is called the frequency
  • For convenience, frequency is most often measured in cycles per second (cps) or the interchangeable Hertz (Hz) (60 cps = 60 Hz), named after the 19th-century physicist Heinrich Hertz. 1000 Hz is often referred to as 1 kHz (kilohertz) or simply '1k' in studio parlance.

Human range of sound frequency:  The range of human hearing in the young is approximately 20 Hz to 20 kHz—the higher number tends to decrease with age (as do many other things). It may be quite normal for a 60-year-old to hear a maximum of 16,000 Hz. 
                 For comparison, it is believed that many whales and dolphins can create and perceive sounds in the 175 kHz range. Bats use slightly lower frequencies for their echo-location system.

Above and below frequency: Frequencies above and below the range of human hearing are also commonly used in computer music studios. We refer to these ranges as:
<20 Hz            sub-audio rate
20 Hz - 20 kHz    audio rate
>20 kHz           ultrasonic
Sub-audio signals are used as controls (since we can't hear them) in synthesis to produce effects like vibrato. The lowest 32' organ pipes also produce fundamental frequencies below our ability to hear them (the lowest, C four octaves below 'middle C' is 16.4 Hz) — we may sense the vibrations with our body or extrapolate the fundamental pitch from the higher audible frequencies (discussed below), but these super-low ranks are usually doubled with higher ranks which reinforce their partials.



Frequency and wavelength:  Frequency is directly related to wavelength, often represented by the Greek letter lambda (λ). The wavelength is the distance in space required to complete a full cycle of a frequency. The wavelength of a sound is inversely proportional to its frequency. The formula is:
wavelength (λ) = speed of sound / frequency
Example: A440 Hz (the frequency many orchestras tune to) in a dry, sea-level, 68°F room would create a waveform that is ~2.56 ft long (2.56 ft = 1128 ft/s ÷ 440 Hz). Be certain to measure the speed of sound and wavelength in the same units. Notice that if the speed of sound changed due to temperature, altitude, humidity or conducting medium, so too would the wavelength.
 Low frequency longer wavelength: As can be seen from the above formula, lower frequencies have longer wavelengths. We are able to hear lower frequencies around a corner because the longer wavelengths refract or bend more easily around objects than do shorter ones. Longer wavelengths are harder for us to directionally locate, which is why you can put your Surround Sound subwoofer most anywhere in a room except perhaps underneath you. At 20°C, sound waves in the human hearing spectrum have wavelengths from 0.0172 m (0.68 inches) to 17.2 meters (56.4 feet).

Doppler effect or Doppler shift: One particularly interesting frequency phenomenon is the Doppler effect or Doppler shift. You've no doubt seen movies where a police siren or train whistle seems to drop in pitch as it passes the listener. In actuality, the wavelength of sound waves from a moving source are compressed ahead of the source and expanded behind the source, creating a sensation of a higher and then lower frequency than is actually being produced by the source. This is the same phenomenon used by astronomers with light wavelengths to calculate the speed and distance of a receding star. The light wavelengths as stars move away are shifted toward the red end of the spectrum, hence the term red shift.
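
For a source moving in a straight line past a stationary listener, the shift follows the standard relation f_heard = f_source × v / (v - v_source) while approaching and f_source × v / (v + v_source) while receding. A small sketch with assumed example numbers (a 700 Hz siren, source speed 30 m/s, speed of sound 344 m/s):

def doppler_observed_hz(f_source, source_speed, approaching, v_sound=344.0):
    """Frequency heard by a stationary listener for a moving sound source."""
    v_rel = v_sound - source_speed if approaching else v_sound + source_speed
    return f_source * v_sound / v_rel

print(round(doppler_observed_hz(700.0, 30.0, approaching=True)))    # ~767 Hz coming toward you
print(round(doppler_observed_hz(700.0, 30.0, approaching=False)))   # ~644 Hz moving away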

  
Formulas and equations for sound:   c = λ × f        λ = c / f = c × T        f = c / λ

Physical value                  Symbol      Unit        Formula
frequency                       f = 1/T     Hz = 1/s    f = c / λ
wavelength                      λ           m           λ = c / f
time period (cycle duration)    T = 1/f     s           T = λ / c
wave speed                      c           m/s         c = λ × f



Wave frequency in Hz = 1/s and wavelength in nm = 10⁻⁹ m