Flicker fusion threshold for insect eyes

I searched BioNumbers for the flicker-fusion threshold of insect eyes in general and of flies in particular. Ruck (1961) gives values for some flies. Is there any database maintained for these values? Has any comparative study been done across many flies/insects?


"The flicker fusion frequencies of six laboratory insects, and the response of the compound eye to mains fluorescent 'ripple'" (Miall, 1978) touches on flicker fusion thresholds for six different insect species commonly used in laboratory settings:

Locusta migratoria, Periplaneta americana, Saturnia pavonia, Antheraea pernyi, Glossina morsitans and Drosophila hydei.

I also found this paper: Potential Biological and Ecological Effects of Flickering Artificial Light (Boyles, 2014) that includes an exhaustive literature search of flicker-fusion thresholds and other related data for many species (including insects). If you're looking for an authoritative source, I'd say that this is it (as there doesn't seem to be a database with this information).


Small Birds Have Ultra-Rapid Vision

Why is it so hard to sneak up on a bunch of sparrows on the sidewalk? According to a new PLOS ONE study, small birds boast ultra-rapid vision that’s faster than that of any other vertebrate – and more than twice as fast as ours. 

The number of changes per second that an animal is capable of perceiving is called the temporal resolution of eyesight. Birds and other flying animals must accurately detect, identify, and track fast-moving objects (like tasty insects) all while maneuvering around branches and watching out for predators. For small, agile birds active in the daytime, high temporal resolution is a must, yet this hasn’t really been investigated.

Uppsala University’s Anders Ödeen and colleagues studied the ability of three perching birds to resolve visual detail in time: blue tits (Cyanistes caeruleus) from southeastern Sweden, and both collared flycatchers (Ficedula albicollis) and pied flycatchers (Ficedula hypoleuca) from the island of Öland in the Baltic Sea. Blue tits are insectivorous during breeding season, while flycatchers catch prey on the wing throughout the year.

Using food rewards, the team trained the birds to distinguish between a pair of LED-arrays: one that flickers and another that shines constantly. To determine the birds’ temporal resolution, the team gradually increased the flicker rate to the point at which the birds couldn’t tell the different lamps apart. This threshold is called the critical flicker fusion frequency (or CFF).
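The procedure is a standard psychophysical threshold measurement; a minimal sketch of the idea is shown below, with the response function standing in (hypothetically) for the trained bird's choice of the flickering LED array.

```python
# Minimal sketch (not the study's actual protocol) of estimating a critical
# flicker fusion frequency (CFF) with an ascending method of limits. The
# response function is hypothetical: in the real experiment it is the trained
# bird choosing the flickering LED array for a food reward.
import random

def bird_chooses_flickering_lamp(frequency_hz: float, true_cff_hz: float = 140.0) -> bool:
    """Simulated trial: above the bird's CFF the choice drops to chance level."""
    p_correct = 1.0 if frequency_hz < true_cff_hz else 0.5
    return random.random() < p_correct

def estimate_cff(start_hz: float = 60.0, step_hz: float = 5.0, trials_per_level: int = 20) -> float:
    """Raise the flicker rate until the bird can no longer tell the lamps apart."""
    freq = start_hz
    while True:
        correct = sum(bird_chooses_flickering_lamp(freq) for _ in range(trials_per_level))
        if correct / trials_per_level < 0.75:   # common psychophysical criterion
            return freq                          # flickering and constant lamps now indistinguishable
        freq += step_hz

print(f"Estimated CFF: about {estimate_cff():.0f} Hz")
```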


Flicker fusion frequencies for blue tits (left) and for collared flycatchers (right, closed diamonds) and pied flycatchers (right, open squares). J.E. Boström et al., PLOS ONE 2016

The flickering and constant lamps became indistinguishable at frequencies of up to 131 hertz (Hz) for blue tits and 141 Hz for collared flycatchers. The CFF of pied flycatchers reached as high as 146 Hz – or about 50 Hz above the highest rate for any other vertebrate. (Our CFF is about 60 Hz.)

Furthermore, their temporal acuity of vision is much higher than researchers predicted based on their size and metabolic rate – which suggests an evolutionary history of strong selection for temporal resolution. After all, at some point during the year, all three of these species live on the insects they catch at high speeds in dense forests. 

However, temporal resolution is different from spatial resolution, which measures the number of details per degree in the field of vision. There are trade-offs between different aspects of vision, and the vision of these small birds isn’t as sharp as that of predatory birds. “Fast vision may, in fact, be a more typical feature of birds in general than visual acuity,” Ödeen says in a statement. “Only birds of prey seem to have the ability to see in extremely sharp focus.” Eagles have the sharpest vision known, and this excellent spatial resolution allows them to spot prey over long distances.

“Fast-eyed” species like tits and flycatchers have compromised spatial acuity, but their vision is as fast as eagle eyes are sharp.


Flicker Beyond Perception Limits - A Summary of Lesser Known Research Results

For the most part, light flicker has not been a topic of conversation when talking about lighting except, perhaps, during the 1980’s when research was being done on fluorescent and discharge lamps. Simple solutions and electronic ballasts seemed to solve the issues then and the subject was forgotten again. However, LED lighting has changed the situation once more and light flicker is a very popular topic of conversation now, raising the questions of: Why should we care about flicker at all? What does flicker fusion really mean? Is there a safe threshold for flicker? Should we care about flicker beyond the visible experience? Do we understand flicker and do we know enough about its effects?

Figure 1: Human vision has evolved and adapted to irregular and slow fluctuations in light intensity like what we get from sunlight filtering through the leaves of a tree or a campfire at night

A Short History of Flicker

Flicker in lighting is a relatively young phenomenon. Our biology has adapted to changes in the light provided by the sun, the moon, fire and candles. But all these changes are "slow" and "irregular" compared to the flicker created by technical means. The same applies to the old thermal artificial light sources like gas light or incandescent electric lighting: the thermal inertia of the hot filament keeps the changes in light output slow and, with the mains frequency high enough, small and invisible. Only incandescent lamps on the 16.67 Hz circuits that some railway systems used in the early days of electrification showed visible flicker.

Figure 2: Even incandescent and halogen lamps flicker, but at a grid frequency of 50-60 Hz virtually no flicker is visible thanks to the thermal inertia of the filament (Credit: Naomi J. Miller, Brad Lehman DoE & Pacific Northwest National Laboratory)

The first time flicker became an issue was with movies. The main question was how continuous movement could be created by a series of still frames shown rapidly one after the other. There are two aspects to this: how many frames per second are needed for a smooth impression, and how to avoid the flicker impression created by the short dark interval needed to transport the celluloid between frames. Experiments showed that a smooth impression of movement can be achieved with 15-18 frames per second, and a professionally smooth impression at 24 frames per second (the number of frames directly affects the cost of a movie, therefore low frame rates are favored). But when those movies were shown to a wider community, some of the audience experienced severe health issues. Further investigation made clear that these health issues were not connected to the content of the movies; rather, the 18-24 Hz flicker was identified as the source of the problem. Today this is known as photosensitive epilepsy, a severe condition triggered by flicker mainly in the range between 8 and 30 Hz, with the risk decreasing above that. To get out of the 8-30 Hz range without tripling the cost of the films, the movie industry tripled the shutter frequency and showed the same frame three times before moving on to the next. Projectors now operate with flicker in the 60-80 Hz range, which has proved sufficient to avoid triggering photosensitive epilepsy. It also made the overall flicker impression experienced when not looking directly at the screen much better. Experiments showed that the flicker impression went away completely for most of the audience above approx. 120 Hz.
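The arithmetic behind the triple-bladed shutter is simple; the short sketch below is purely illustrative and only restates the figures quoted above.

```python
# Quick check of the figures quoted in the text: showing each film frame three
# times moves the dark-interval flicker out of the 8-30 Hz photosensitive-epilepsy band.
frame_rate = 24          # frames per second stored on the film
flashes_per_frame = 3    # a triple-bladed shutter shows each frame three times
flicker_hz = frame_rate * flashes_per_frame
in_risk_band = 8 <= flicker_hz <= 30
print(f"Projector flicker: {flicker_hz} Hz "
      f"({'inside' if in_risk_band else 'outside'} the 8-30 Hz risk band)")
```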

Figure 3: Photos of CRT TV monitors show the 25/50 Hz flicker as banding, an effect very similar to what appears when taking pictures or videos under flickering LED lamps

Flicker and Television

The development of the TV screen and the fluorescent lamp right after World War Two brought flicker back into focus, and new attempts were made to understand it better.

To keep the transmission bandwidth in a reasonable range, TV screens could handle just 25 (Europe) or 30 (US) frames per second at a reasonable number of lines. To stay out of the critical flicker frequency range and reach a 50/60 Hz refresh rate on the screen, broadcasting was organized in half-frames with interlaced lines: a really tricky way to get out of the flicker frequencies that caused photosensitive epilepsy. But in any case TV sets were known for the visual flicker impression they caused, and in some areas they were called a "flicker box" (German: "Flimmerkiste").

The TV exploited the fusion issue to its maximum: It was actually a single spot that rapidly moved line by line over the screen, relying on the eyes to fuse it into a full image.

The flicker integration by the eye was believed to be caused by the biochemical detection process, which has a substantial relaxation time; the experience of a full-screen view written by a fast-moving single spot supported this belief.

Figure 4: With the introduction of FL lamps and the use of magnetic ballasts, flicker issues became apparent and critical in lighting. The issues were solved by using multiple lamps operated on differently shifted phases (Credit: Naomi J. Miller, Brad Lehman DoE & Pacific Northwest National Laboratory)

For photographers, the TV added a challenge that had not been known up until then: making a "screenshot" with a reasonably short exposure delivered a single bright dot with a short tail, and with longer exposures a section of the screen stayed grey (there were no black screens in those days) instead of a complete picture. Also, TV or movie camera shots of TV screens ended up with vertically moving bars caused by interference if no special precautions were taken.

Figure 5: Various flicker frequencies have different effects on health or visual and cognitive performance

Flicker and Lighting

Just as the TV exposed us to flicker in our living rooms, the fluorescent lamp took it to the workplace. The flicker caused by the early fluorescent lamp was not experienced as an issue because in most applications it was substantially reduced by running multiple lamps per luminaire on differently shifted phases, which reduced the depth of the dark phase and raised the resulting flicker frequency to well above 100 Hz.

Finally, in 3-phase installations with multiple lamps per luminaire, each lamp on either an inductive or a capacitive ballast with the corresponding phase shift, opaque white luminaire covers and little to no beam directionality, the residual flicker in larger rooms (such as production or sports spaces) ended up above 600 Hz with a flicker depth of well below 25% of the intensity! This created a practical kind of "no flicker" experience. The setting also allowed for TV shots without disturbing stroboscopic effects.

With the higher efficiency of the now thinner fluorescent lamps, the use of phosphors with shorter relaxation times, and highly directional anodized louvers in which each lamp illuminated a specific area, the lighting industry started creating noticeable flicker without noticing it. Technically, a single fluorescent lamp creates flicker at 100/120 Hz, with a full dark phase that covers some 25% of the time (the dark time depends very much on the line voltage applied).
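Two metrics commonly used to quantify this kind of flicker are percent flicker (modulation depth) and flicker index. The sketch below is purely illustrative: it assumes a full-wave-rectified sine as a rough stand-in for the light output of a single magnetically ballasted lamp, not a measured waveform.

```python
# Illustrative sketch: percent flicker and flicker index for an assumed
# light-output waveform (full-wave-rectified sine at twice the mains frequency,
# a rough stand-in for a single magnetically ballasted lamp or a poorly
# smoothed driver -- not a measurement).
import numpy as np

mains_hz = 50
t = np.linspace(0, 1 / mains_hz, 2000, endpoint=False)   # one mains cycle
light = np.abs(np.sin(2 * np.pi * mains_hz * t))          # flicker at 2 x mains = 100 Hz

percent_flicker = 100 * (light.max() - light.min()) / (light.max() + light.min())
mean_level = light.mean()
above_mean = np.clip(light - mean_level, 0, None)
flicker_index = above_mean.sum() / light.sum()             # area above mean / total area

print(f"Percent flicker: {percent_flicker:.0f}%")          # 100% for this idealised waveform
print(f"Flicker index:   {flicker_index:.2f}")
```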

With the flicker of discharge lamps in traffic lighting a new aspect of flicker was on the edge of being noticed: the stroboscopic effects of lights that pass by at higher angular speed.

In the mid-1980s modern offices were found to make people sick in the long run (sick days rose, respiratory infections increased, and eye-strain complaints became more frequent), and all kinds of root causes were thoroughly researched. Besides airflow issues, including poor humidity management and exposure to germs from the early ventilation and heating systems, the "sick building syndrome" research identified eye strain caused by long exposure (multiple months of working hours) to flicker at 100 Hz as part of the issue [1].

This result was not well received. In fact, many believed that the cause was poor lighting design, improper computer screens and the like. It was difficult to believe that there could be biological health effects when no flicker was visible at all, especially with literature asserting that the retinal cells were not able to follow that flicker frequency.

All the experience with older installations seemed to contradict that research, and in addition the lighting industry was accused of using such arguments to push sales of the new "electronic ballast" technology, which was able to dramatically reduce flicker. It is the destiny of most long-term exposure effects to be ignored for a long time.

Bio-Medical Issues of Flicker

In the meantime, biomedical research revealed additional features of the eye: besides the clearly visible movement caused by scanning the environment (looking around), a continuous, rapid but small movement was detected. The position of the eye jitters minimally around the actual point of fixation, in all directions, at an astonishing rate of 80-100 position changes per second (individually different, and apparently rising with age).

This led to wide speculation and heated discussion, especially as the movement was faster than the fusion frequency, creating an apparent contradiction: why should the eye move faster than the receptors are able to generate signals? At the time it was believed that the fusion frequency was set by the limited response time of the retinal cells. The full evolutionary reason for this effect, now called "ocular microtremor" (OMT), and the full bundle of purposes this biological flicker generator possibly serves are still objects of research today.

In the meantime it has become clear that the fusion frequency is set by the brain's handling of the data stream; the eye itself has been shown to be able to follow optical signals up to at least 200 Hz.

The linear fluorescent lamp moved away from flicker with the T5 lamp, which was operated on electronic ballasts only. Quality vendors quoted "residual ripple" figures in the single-digit percentage range or below as a quality feature of their products. In contrast, compact fluorescent lamps brought flicker back to lighting, especially to the home environment, with the deployment (and finally the legal push) of "energy saver" replacement lamps for private use, where designers cared more about cost and less about the light quality achieved. Most of them feature substantial flicker at double the mains frequency.

With the high-pressure discharge lamps used in street lighting, flicker was reduced as the lamp also emits a substantial portion of light during the change of polarity.

The flicker of TV sets and computer screens was tackled by quality manufacturers when it was understood to be connected to eye-strain-related health issues. TFT screen technology and fast micro- and display controllers, together with cheap RAM, allowed for low flicker at higher frequencies (e.g. "100 Hz TV" was a quality sales argument for a while).

Figure 6: With no inertia in the light emission of LEDs, poorly designed drivers (left) and PWM dimming (right: e.g. at an approx. 80% level) brought flicker back to lighting (Credit: Naomi J. Miller, Brad Lehman DoE & Pacific Northwest National Laboratory)

The Return of the Flicker

When LEDs first came into TV sets they were usually driven by internal circuitry with no need for dimming, and therefore no flicker was introduced. But when LEDs came into lighting, dimming and color change (by dimming the color components relative to one another) became easily accessible, and the PWM technology mainly used for dimming brought flicker back into mainstream lighting. The LED follows the electric current applied without any inertia in its light emission. As higher PWM frequencies cause higher cost and losses, flicker at comparatively low frequencies was back; the "flicker knowledge" of the early 1990s and the drive to keep lighting flicker above the photosensitive epilepsy range was not accessible to (or less prioritized by) the new generation of engineers designing the driver circuits for the new technology. This went unnoticed until the first camera shots of PWM-lit environments caused trouble, leading (history repeats itself) to special equipment for medical and sports lighting and for areas where high-end camera applications (especially moving cameras) are a requirement.
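The sketch below illustrates the point about PWM dimming; the frequency and duty cycle are made-up example values, not figures from any particular product.

```python
# Illustrative sketch of PWM dimming: because an LED follows its drive current
# with essentially no inertia, the light is fully OFF during the low part of
# every PWM cycle -- 100% modulation depth regardless of the dimming level.
# PWM frequency and duty cycle below are assumed example values.
import numpy as np

pwm_hz = 400        # assumed PWM frequency
duty = 0.80         # assumed dimming level (80% of full output)

t = np.linspace(0, 1 / pwm_hz, 1000, endpoint=False)       # one PWM period
light = (t * pwm_hz < duty).astype(float)                   # 1 = on, 0 = off

mean_level = light.mean()                                    # perceived brightness ~ duty cycle
depth = 100 * (light.max() - light.min()) / (light.max() + light.min())
print(f"Average output: {mean_level:.0%} of full, "
      f"flicker at {pwm_hz} Hz with {depth:.0f}% modulation depth")
```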

Now, again, the flicker discussion has ended up at "120 Hz is acceptable", as, for example, George Zisses showed in his excellent article in LpR 53, Jan/Feb 2016, based on published, commonly accepted and applicable standards. Doing so keeps the flicker out of the readily perceivable range for most individuals. However, the main research and argumentation, not just in that article, is focused on visible perception, ignoring the known facts about OMT and the eye-strain health issues connected to long-term exposure to flickering light.

Most of the material used is based on the implicit assumption that what cannot be perceived cannot cause any harm. Science has had to readjust from assumptions of this kind more than once; just look at X-rays and radioactivity.

With lighting, the assumption made by the standards sounds correct in the first place, but this is possibly not sufficient for responsible persons and organizations:

  • There is no proof available that flicker above the said 120 Hz is harmless
  • Only a little research is available concerning long-term effects, and what is available was performed specifically with fluorescent lights, which have totally different, and possibly less harmful, flicker characteristics than today's LED lighting

Poor Research Coverage and Poor Results Reception

Regarding the poor research coverage of long-term effects, there is one prominent investigation of mid- to longer-term effects available, conducted by Wilkins et al. as part of the sick building research in the late 1980s. It was focused on 100 Hz modulation [1]. It showed that switching from long exposure (multiple months) to substantially modulated light (60% modulation) to light with low modulation (6% at 100 Hz) reduces headaches and eyestrain quickly (within a few weeks). These results are statistically significant. Most of the lighting industry ignored them, as the opposite effect (increased eyestrain and headaches) could not be shown within the four weeks of exposure that the research campaign allowed for.

On top of the limited knowledge, which encourages the notion "where there is no proof of an effect there is no effect", there are some hints that flicker at higher frequencies has immediate effects on some embedded mechanisms of our eyes.

Hints of Flicker Effects on Visual Performance:

  • The focusing of the eye changes slightly with the frequency applied up to 300 Hz [4]
  • The ability to separate fine structures is reduced by flicker up to above 300 Hz [3]
  • The visual nerve follows intensity frequencies applied up to 200 Hz
  • Transitional effects of flicker have been claimed to be detectable up to 800 Hz

This seems to be very high given the biochemical nature of the sensors and the relatively low fusion frequency of our visual system. It raises the question of whether there is any plausible mechanism by which a non-visible modulation at higher frequencies could interfere with our perception system.

To answer that, it is necessary to understand that our visual system follows a layered approach: the retinal cells deliver the actual "reading" to a first layer of neuronal structure, the results are delivered to a second layer, and the overall result is then passed to the brain. Doing a little speculative work, one could expect OMT to serve a minimum of two purposes.

Two possible purposes of OMT:

  • Cross-adjusting the attenuation of the cells by scanning over the same spatial position with two adjacent cells
  • Adding to the resolution by covering the space between the cells, e.g. scanning for the exact position of a transition

The trouble is that both rely on the light source being constant over short intervals. The relative calibration fails if the reference changes while switching between cells, and a sharp transition detected during the movement is confounded by a light source that produces sharp transitions of its own. So there is a possible conflict, but it is complex in nature and research on it has not yet developed very far.

Searching the Safe Side

The simple question of where the edge frequency lies leads to a difficult and multidimensional answer, and possibly needs to be split further according to the different types of retinal cells.

Basic Conditions:

  • The edge frequency of the retina cell could well be as high as 800 Hz or slightly above (that is the maximum that has been claimed to be visible in experiments)
  • The receptors in our retina never work alone; they are always networked within a complex neuronal setting. Maybe the actual edge frequency is higher, but the basic network layer stops faster signals from propagating: the edge frequency of the signals delivered to the optic nerve seems to be in the 200 Hz range, and most likely all effects above 120 Hz are blocked by the underlying network from being propagated to the optic nerve
  • The edge frequency for the visual cortex of the brain seems to be around 25 Hz, known as the fusion frequency
  • The edge frequency for other parts of the brain affected by the signals from the optic nerve seems to be at or below 30 Hz, the threshold for photosensitive epilepsy

This is again proof that the eye is a fantastic optical instrument, and uses various technologies to enhance the view (some of these technologies can also backfire when it comes to visual illusions, but this is a different story).

Figure 7: More stringent flicker regulations are already under discussion. At Lightfair 2015, Naomi J. Miller and Brad Lehman showed in their lecture "FLICKER: Understanding the New IEEE Recommended Practice" some applications that are recognized as being very critical. However, research results that would allow clear limits to be determined for staying on the safe side are still missing

Two example tasks the eye performs in an astonishing way:

  • Humans are able to resolve structures that are in (and seem to be somewhat below) the range of the spacing of the receptor cells: the performance is better than the pixelation imposed by the receptor cell array would suggest. This could well be related to enhancement using the microtremor
  • Humans are able to detect structures that are based on very low luminosity differences. This is possible only if neighbouring receptor cells are exactly calibrated (and continuously recalibrated) against each other, a difficult task for biochemical photoreceptors, and also a difficult task for technical cameras: luminosity transition structures on the edge of visibility need really advanced apparatus to be able to photograph them

Both achievements need (relatively) steady lighting situations during the scan. Changing light affects the scan results. The higher the modulation, the more they are jeopardized.

Humans are used to "looking closely" to perform specific tasks, e.g. resolving fine structures or barely perceptible luminosity transitions; technically this translates to concentrating on a point, or in other words allowing a longer integration time of the sensor results to get rid of sensor noise. But modulated light, and especially deeply modulated light, will ruin attempts to get reasonable results out of the OMT enhancement, and may well be a source of stress to the eye and the brain, especially when applied for a long time or during difficult visual tasks.

Research in Project Prakash on blind persons who regained their sight as adults [2] showed that the visual apparatus and object recognition ability adapt to a quite normal view after a while even in adults, but do not gain some of the more advanced abilities. This could indicate that the complex analysis abilities described above are trained ones, acquired during childhood.

Conclusions & Prospects

Short-term exposure to higher-frequency flicker seems to be no trouble for adults, as long as no advanced visual tasks need to be performed. There are no suggestions that frequencies above 800 Hz affect humans, but there is enough evidence that flicker up to 400 Hz is not harmless under long-term exposure. The results strongly suggest negative effects such as stress or wear on the visual apparatus. HD movie cameras (e.g. as found in high-end smartphones) show severe interference issues with flickering light at lower frequencies, very much like the old TV screens did. Pets and especially birds may suffer from flicker that is not visible to humans, but that is a different issue.

While research cannot yet give clear evidence about safe lower thresholds, longer-term exposure to higher-frequency flicker should be avoided, especially wherever (younger) children spend longer periods of time, to minimize the possibility of interference with their later visual abilities. To stay safe, responsible manufacturers should avoid flicker below at least 400 Hz for lighting that is intended to be used in offices, working zones, baby and children’s rooms, kindergarten installations and screen illumination of children’s toys.

The existing research is poor and many aspects of flicker are still not clear today, such as the influence of the flicker shape. More research is definitely needed to understand where the safe zone really is.

References:
[1] Wilkins, A.J., Nimmo-Smith, I.M., Slater, A., Bedocs, L.: Fluorescent lighting, headaches and eye-strain. Lighting Research and Technology, 21(1), 11-18, 1989

[2] Sinha, P.: Es werde Licht [Let There Be Light]. Spektrum der Wissenschaft, 18.7.2014 (partly based on Held, R. et al.: The Newly Sighted Fail to Match Seen with Felt. Nature Neuroscience, 14, 551-553, 2011)

[3] Lindner, H.: Untersuchungen zur zeitlichen Gleichmässigkeit der Beleuchtung unter besonderer Berücksichtigung von Lichtwelligkeit, Flimmerempfindlichkeit und Sehbeschwerden bei Beleuchtung mit Gasentladungslampen [Investigations into the temporal uniformity of lighting, with particular attention to light ripple, flicker sensitivity and visual complaints under gas-discharge lamp lighting]. Thesis, TU Ilmenau, Germany, 1989

[4] Jaschinski, W.: Belastungen des Sehorgans bei Bildschirmarbeit aus physiologischer Sicht [Strain on the visual system during screen work from a physiological perspective]. Optometrie, 2, 60-67, 1996


Biology in art: Genetic detectives ID microbes suspected of slowly ruining humanity's treasures

DNA science may help restore and preserve historic works, and unmask counterfeits. Also: the trait elite baseball hitters share with Leonardo da Vinci, a 'quick eye' with higher 'frames per second' -- a function of training, genetics, or both?

Leonardo da Vinci DNA Project

IMAGE: Leonardo da Vinci noted that the fore and hind wings of a dragonfly are out of phase -- verified centuries later by slow motion photography.

A new study of the microbial settlers on old paintings, sculptures, and other forms of art charts a potential path for preserving, restoring, and confirming the geographic origin of some of humanity's greatest treasures.

Genetics scientists with the J. Craig Venter Institute (JCVI), collaborating with the Leonardo da Vinci DNA Project and supported by the Richard Lounsbery Foundation, say identifying and managing communities of microbes on art may offer museums and collectors a new way to stem the deterioration of priceless possessions, and to unmask counterfeits in the $60 billion a year art market.

Manolito G. Torralba, Claire Kuelbs, Kelvin Jens Moncera, and Karen E. Nelson of the JCVI, La Jolla, California, and Rhonda Roby of the Alameda County (California) Sheriff's Office Crime Laboratory, used small, dry polyester swabs to gently collect microbes from centuries-old, Renaissance-style art in a private collector's home in Florence, Italy. Their findings are published in the journal Microbial Ecology.

The genetic detectives caution that additional time and research are needed to formally convict microbes as a culprit in artwork decay but consider their most interesting find to be "oxidase positive" microbes primarily on painted wood and canvas surfaces.

These species can dine on organic and inorganic compounds often found in paints, in glue, and in the cellulose in paper, canvas, and wood. Using oxygen for energy production, they can produce water or hydrogen peroxide, a chemical used in disinfectants and bleaches.

"Such byproducts are likely to influence the presence of mold and the overall rate of deterioration," the paper says.

"Though prior studies have attempted to characterize the microbial composition associated with artwork decay, our results summarize the first large scale genomics-based study to understand the microbial communities associated with aging artwork."

The study builds on an earlier one in which the authors compared hairs collected from people in the Washington, D.C., and San Diego, California, areas, finding that microbial signatures and patterns are geographically distinguishable.

In the art world context, studying microbes clinging to the surface of a work of art may help confirm its geographic origin and authenticity or identify counterfeits.

Lead author Manolito G. Torralba notes that, as art's value continues to climb, preservation is increasingly important to museums and collectors alike, and typically involves mostly the monitoring and adjusting of lighting, heat, and moisture.

Adding genomics science to these efforts offers advantages of "immense potential."

The study says microbial populations "were easily discernible between the different types of substrates sampled," with those on stone and marble art more diverse than wood and canvas. This is "likely due to the porous nature of stone and marble harboring additional organisms and potentially moisture and nutrients, along with the likelihood of biofilm formation."

As well, microbial diversity on paintings is likely lower because few organisms can metabolize the meagre nutrients offered by oil-based paint.

"Though our sample size is low, the novelty of our study has provided the art and scientific communities with evidence that microbial signatures are capable of differentiating artwork according to their substrate," the paper says.

"Future studies would benefit from working with samples whose authorship, ownership, and care are well-documented, although documentation about care of works of art (e.g., whether and how they were cleaned) seems rare before the mid-twentieth century."

"Of particular interest would be the presence and activity of oil-degrading enzymes. Such approaches will lead to fully understanding which organism(s) are responsible for the rapid decay of artwork while potentially using this information to target these organisms to prevent degradation."

"Focusing on reducing the abundance of such destructive organisms has great potential in preserving and restoring important pieces of human history."

Biology in Art

The paper was supported by the US-based Richard Lounsbery Foundation as part of its "biology in art" research theme, which has also included seed funding efforts to obtain and sequence the genome of Leonardo da Vinci.

The Leonardo da Vinci DNA Project involves scientists in France (where Leonardo lived during his final years and was buried), Italy (where his father and other relatives were buried, and descendants of his half-brothers still live), Spain (whose National Library holds 700 pages of his notebooks), and the US (where forensic DNA skills flourish).

The Leonardo project has convened molecular biologists, population geneticists, microbiologists, forensic experts, and physicians working together with other natural scientists and with genealogists, historians, artists, and curators to discover and decode previously inaccessible knowledge and to preserve cultural heritage.

Related news release: Leonardo da Vinci's DNA: Experts unite to shine modern light on a Renaissance master http://bit.ly/2FG4jJu

Measuring Leonardo da Vinci's "quick eye" 500 years later.

Could he have played major-league baseball?

Famous art historians and biographers such as Sir Kenneth Clark and Walter Isaacson have written about Leonardo da Vinci's "quick eye" because of the way he accurately captured fleeting expressions, wings during bird flight, and patterns in swirling water. But until now no one had tried to put a number on this aspect of Leonardo's extraordinary visual acuity.

David S. Thaler of the University of Basel, also a guest investigator in the Program for the Human Environment at The Rockefeller University, has now done so, allowing comparison of Leonardo with modern measures. Leonardo fares quite well.

Thaler's estimate hinges on Leonardo's observation that the fore and hind wings of a dragonfly are out of phase -- not verified until centuries later by slow motion photography (see e.g. https://youtu.be/Lw2dfjYENNE?t=44).

To quote Isaacson's translation of Leonardo's notebook: "The dragonfly flies with four wings, and when those in front are raised those behind are lowered."

Thaler challenged himself and friends to try seeing if that's true, but they all saw only blurs.

High-speed camera studies by others show the fore and hind wingbeats of dragonflies differ by 10 to 20 milliseconds -- one hundredth to one fiftieth of a second -- beyond average human perception.

Thaler notes that "flicker fusion frequency" (FFF) -- akin to a motion picture's frames per second -- is used to quantify and measure "temporal acuity" in human vision.

When frames per second exceed the number of frames the viewer can perceive individually, the brain constructs the illusion of continuous movement. The average person's FFF is between 20 and 40 frames per second; current motion pictures present 48 or 72 frames per second.

To accurately see the angle between dragonfly wings would require temporal acuity in the range of 50 to 100 frames per second.
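The arithmetic behind that range is just the reciprocal of the wing-phase offset; the snippet below spells it out using the figures quoted above.

```python
# The arithmetic behind the claim above: a 10-20 ms offset between fore and
# hind wingbeats can only be resolved by a visual system that samples at least
# once within that interval, i.e. at roughly 50-100 "frames" per second.
for offset_ms in (20, 10):
    required_fps = 1000 / offset_ms          # samples per second needed to catch the offset
    print(f"Wing-phase offset of {offset_ms} ms -> temporal acuity of ~{required_fps:.0f} frames/s needed")
```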

Thaler believes genetics will account for variations in FFF among different species, which range from a low of 12 in some nocturnal insects to over 300 in Fire Beetles. We simply do not know what accounts for human variation. Training and genetics may both play important roles.

"Perhaps the clearest contemporary case for a fast flicker fusion frequency in humans is in American baseball, because it is said that elite batters can see the seams on a pitched baseball," even when rotating 30 to 50 times per second with two or four seams facing the batter. A batter would need Leonardo-esque FFF to spot the seams on most inbound baseballs.

Thaler suggests further study to compare the genome of individuals and species with unusually high FFF, including, if possible, Leonardo's DNA.

Flicker fusion for focus, attention, and affection

In a companion paper, Thaler describes how Leonardo used psychophysics that would only be understood centuries later -- and about which a lot remains to be learned today -- to communicate deep beauty and emotion.

Leonardo was master of a technique known as sfumato (the word derived from the Italian sfumare, "to tone down" or "to evaporate like smoke"), which describes a subtle blur of edges and blending of colors without sharp focus or distinct lines.

Leonardo expert Martin Kemp has noted that Leonardo's sfumato sometimes involves a distance dependence which is akin to the focal plane of a camera. Yet, at other times, features at the same distance show selective sfumato, so a simple plane of focus is not the whole answer.

Thaler suggests that Leonardo achieved selective soft focus in portraits by painting in overcast or evening light, where the eyes' pupils enlarge to let in more light but have a narrow plane of sharp focus.

To quote Leonardo's notebook, under the heading "Selecting the light which gives most grace to faces": "In the evening and when the weather is dull, what softness and delicacy you may perceive in the faces of men and women." In dim light pupils enlarge to let in more light but their depth of field decreases.

By measuring the size of the portrait's pupils, Thaler inferred Leonardo's depth of focus. He says Leonardo likely sensed this effect, perhaps unconsciously in the realm of his artistic sensibility. The pupil / aperture effect on depth of focus wasn't explained until the mid-1800s, centuries after Leonardo's birth in Vinci, Italy in 1452.

What about selective focus at equal distance? In this case Leonardo may have taken advantage of the fovea, the small area on the back of the eye where detail is sharpest.

Most of us move our eyes around and, because of our slower flicker fusion frequency, construct a single 3D image of the world by jamming together many partially in-focus images. Leonardo recognized, and "froze", the separate snapshots from which we construct ordinary perception.

Says Thaler: "We study Leonardo not only to learn about him but to learn about ourselves and further human potential."

Thaler's papers (at https://bit.ly/2WZ2cwo and https://bit.ly/2ZBj7Hi) evolved from talks at meetings of the Leonardo da Vinci DNA Project in Italy (2018), Spain and France (2019).

They form part of a collection of papers presented at a recent colloquium in Amboise, France, now being readied for publication in a book: Actes du Colloque International d'Amboise: Leonardo de Vinci, Anatomiste. Pionnier de l'Anatomie comparée, de la Biomécanique, de la Bionique et de la Physiognomonie, edited by Henry de Lumley, President of the Institute of Human Paleontology, Paris. Originally planned for release in late spring 2020, publication was delayed by the global virus pandemic but should be available from CNRS Editions in the second half of the summer.

Other papers in the collection cover a range of topics, including how Leonardo used his knowledge of anatomy, gained by performing autopsies on dozens of cadavers, to achieve Mona Lisa's enigmatic smile.

Leonardo also used it to exact revenge on academics and scientists who ridiculed him for lacking a classical education, sketching them with absurdly deformed faces to resemble birds, dogs, or goats.

De Lumley earlier co-authored a 72-page monograph for the Leonardo DNA Project: "Leonardo da Vinci: Pioneer of comparative anatomy, biomechanics and physiognomy."



Flicker fusion threshold for insect eyes - Biology

The pictures above illustrate three basic models of the compound eye . Adult insects have one pair of
compound eyes. As the name suggests, the compound eye is made up of a series of 'eyes' compounded
together - that is they have many lenses. Each lens is part of a prismatic unit called an ommatidium (plural
ommatidia). Each ommatidium appears on the surface as a single polygon or dome, called a facet. The
models above each show 60 such facets from 60 ommatidia arranged in 6 rows of ten. The facets may be
hexagonal (6-sided), squarish, circular or hemispherical. Hexagonal packing covers the surface of the eye
with the highest number of facets. However, eyes with hexagonal facets will also have some
pentagonal (5-sided) or quadrilateral (4-sided) facets since hexagons cannot completely pack a spherical
surface without leaving gaps, whilst a combination of hexagons and pentagons can.

These geometries are important, because to some extent each ommatidium and its corresponding facet
behave as a single optical unit, and the more such units that fit into a given area the more resolution
(detail) the eye can see. The more ommatidia that add to the image, the more points or 'pixels' that go to
make up the final image. In vertebrates, like humans, the arrangement is quite different - a single 'facet'
and a single lens covers a retina of many sensory cells, where each sensory cell contributes one point or
'pixel' to the final image, so the retinal sensory cells are the optical units as far as resolution of the final
image is concerned. In insects, however, each facet encloses one ommatidium containing just 7 to 11
sensory cells. In the human retina, in its most sensitive region (known as the fovea) some 175 000
sensory cells per square millimetre are packed into an hexagonal array. In the insect, the compound eye
contains anything from about half a dozen ommatidia to 30 000 or more. For example, the wingless
silverfish have only a few ommatidia, or none at all, whilst the dragonfly has about 30 000 ommatidia in
each compound eye. Dragonflies catch prey on the wing and so they need better visual resolution, which
is why they have such large compound eyes and so many ommatidia. Often the density of the facets is
greatest in certain parts of the eye - those parts that are most often used for more accurate vision.
Similarly, in humans, the density of sensory cells in the retina declines away from the central fovea toward
the edges of the visual field, which is why the edge of your visual field is so fuzzy. For the same reason,
one can often sex flies by the size of their compound eyes - male flies have larger eyes that almost meet
in the middle of the face, since they need keener vision to help them spot females!

Insect eyes are one of the most prominent features of many insect heads and they vary tremendously in
colour. Whether an insect is camouflaged, or coloured to advertise itself as unpleasant to potential
predators or as attractive to potential mates, the colour and pattern of the eyes are very important!

This type of ommatidium is from a type of compound eye called an apposition eye and is characteristic
of diurnal (day-active) insects. Nocturnal insects have a modification to this plan, called the
superposition eye, which reduces spatial resolution but increases sensitivity to dim light. We will look at
these differences in more detail later.

Notice that light enters this ommatidium from above, through the corneal facet. The cornea and
crystalline cone together focus the light onto the rhabdom. This dual lens system forms the
light-focusing or dioptric apparatus of the ommatidium. Vertebrate, including human, eyes are similar in
this respect - the outer cornea and the lens together with the various liquids or gels in the eye focus the
light.

There are usually seven or eight sensory cells (also called retinula cells, one of which is usually highly
modified) in each ommatidium; these surround the optic rod or rhabdom, which is a cylinder created by
a multitude of interdigitating finger-like processes (called microvilli) that extend from the sensory cells
and meet in the middle. This rhabdom is the actual light detector and contains high concentrations of
light-sensitive pigments called rhodopsins (rhodopsins require vitamin A for their manufacture). The iris
cells are also called pigment cells, since they are heavily pigmented to stop stray light entering the
ommatidium through the sides (such as light that has entered through neighbouring ommatidia).

The ommatidium is an energy transducer - light energy absorbed by rhodopsins in the rhabdom is
converted into electrical (strictly electrochemical) energy, and the sensory cells send electrical signals
that encode the light stimulus to the optic lobes of the brain. There is a pair of optic lobes (or optic
ganglia), one innervating each compound eye.

How does the compound eye of an insect compare to the eye of a human?

First of all let's look at visual acuity. Visual acuity is the actual spatial resolution that the eye can see,
and can be measured, for example, by using a grating of alternating black and white vertical lines and
seeing how close together the stripes must be when viewed at a set distance before the stripes merge
and can no longer be distinguished as a series of lines. Visual acuity is determined in part by the
maximum possible resolution of the eye, as determined by the spatial density of sensors - retinal cells
or ommatidia which determines the number of points that make up the final image. It is also determined
by the optical limits and imperfections of the lens system. Even if the optics are as perfect as possible,
light will still diffract (spread out) to some degree as it passes through the lenses, such that a single
point of light coming from an object becomes an extended or blurred spot of light in the image. Insect
eyes are limited by this diffraction.

Ommatidial facets are very small, about 10 micrometres (or one hundredth of a millimetre) in diameter.
This allows many points to compose the final image. The type of eye we have considered so far is
typical of diurnal insects, such as flies (Diptera), wasps and bees (Hymenoptera), many beetles
(Coleoptera), dragonflies and damselflies (Odonata) and day-flying butterflies (Lepidoptera). This type
of eye is adapted for bright light and is called an apposition compound eye because the final image is
made up of discrete points, each point formed by a single ommatidium, placed side-by-side (apposed
to one another) to form an image which is a mosaic of points. This does not mean, however, that the
insect sees a disjointed image made up of points, nor that the insect sees multiple images, since the
brain integrates these images and so what the eye 'sees' and what the insect 'perceives' are two
different albeit related things.

In the human eye, the retina is made up of an hexagonal array of sensor cells, called rods and cones,
and the distance between adjacent sensors is only about 2 micrometres and so these sensors can
pack together to give a density (of about 175 000 per square millimetre) which is about 25 times
higher than the ommatidial density of the insect eye. This allows the human eye to detect greater
spatial detail or resolution in the object to give a more detailed image. The question is, why don't
insects have ommatidia that are only 2 micrometres in diameter? The answer is because diffraction
limits the performance of such small lenses. In fact the rhabdom acts like a wave-guide when it's less
than about 5-10 micrometres in diameter. A wave-guide is to light what a hollow tube is to air blown
through it - when you blow into a flute, the vibrations are confined in a narrow space and all but certain
frequencies of vibration cancel out and we get what we call standing waves which give the fundamental
and harmonic tones of the note played. In a light wave-guide a similar situation exists - light waves
vibrate only at certain frequencies that are confined to (or guided along) the optical tube. As it happens,
when this occurs in a narrow rhabdom, it prevents the light from being focused into a point smaller
than about 5 micrometres in diameter. In short, such small optics are of no advantage as they are
unable to focus the light into small enough points. Lenses only work above a certain size.

Thus, the visual acuity of the compound eye is about one hundred times less than that of the human
eye due to design constraints. The only possible way to overcome this is to make the compound eye
larger. In fact an estimate can be calculated to show that the compound eye would need a diameter of
about 20 metres to see as much spatial detail as the human eye, which is about the size of a house!
Dragonflies have among the largest eyes in the insect world, with
compound eyes several millimetres in diameter since they require quite sharp vision in order to catch
prey insects on the wing. Indeed, they can do this better than we could despite having less sharp
vision.
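One common back-of-the-envelope version of that size estimate (not necessarily the calculation referred to above) combines the diffraction limit of a single facet with the sampling density of the ommatidial array; the wavelength and target resolution below are assumed round numbers.

```python
# Back-of-the-envelope sketch (assumed inputs, not the page's own calculation):
# in a diffraction-limited apposition eye, the angle between ommatidia must match
# the desired resolution, and each facet must be wide enough that diffraction blur
# (~ wavelength / facet diameter) does not exceed that angle.
# Eye radius then scales as wavelength / (resolution angle)^2.
import math

wavelength_m = 500e-9                        # green light (assumed)
target_resolution_rad = math.radians(1 / 60) # ~1 arcminute, typical human acuity (assumed)

facet_diameter_m = wavelength_m / target_resolution_rad    # smallest facet that avoids diffraction blur
eye_radius_m = facet_diameter_m / target_resolution_rad    # spacing requirement: R = D / delta-phi
print(f"Facet diameter needed: {facet_diameter_m * 1e3:.1f} mm")
print(f"Eye diameter needed:   {2 * eye_radius_m:.0f} m (house-sized, of the order quoted above)")
```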

Contrast is closely related to visual acuity in the sense of spatial resolution, but more exactly contrast
is the ability to distinguish similar shades of the same colour, say shades of grey, and is important in
defining the edges of objects. You can see the words on this page because the black type contrasts
strongly with the white page; grey text on a white page, or dark grey text on a black background, is
much harder to read because the contrast is lower.

For the same sort of reasons contrast falls as light levels fall - in dim light the contrast is less. To
overcome this, in dim light an optical system needs to collect more light. An astronomer's telescope
looking at dim galaxies far away would benefit by having a large diameter aperture (the aperture is the
opening at the end which directs light into the tube of the telescope) to gather in more of the dim light
coming from such far away objects. Alternatively, one can collect the light for longer periods of time -
an astronomer might leave their telescope trained on the same patch of sky for minutes or hours,
rotating the telescope to compensate for rotation of the Earth. Clearly, there is a limit to the length of
time that an animal's eye can gather light from the same object, since the animal world is dynamic and
if you don't see the predator quickly you are more likely to get eaten! Insects are limited by the small
apertures of each ommatidium in the compound eye. Indeed, the diurnal apposition type of eye can
only detect weak contrast in bright daylight; it copes reasonably well in room light, but these
insects stop flying if light levels drop below room light, such as in moonlight or starlight. It is
possible to calculate the number of photons entering each ommatidium each second. The insect eye
collects light for about 0.1 second to form a given image, and it needs
to receive about one million photons (photons are particles or the smallest possible packets of light) in
this time period to maximise contrast and this is only achieved, in the apposition eye, in broad daylight.
The absolute minimum threshold for vision is about the same in insects and humans at about 1 photon
every 40 minutes, which is extremely sensitive! However, only very strong contrast could be detected
in such low light levels.
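A rough version of that photon-catch calculation is sketched below; the original worked example is not reproduced here, so all input values are assumptions of textbook order of magnitude rather than figures from this page.

```python
# Rough sketch of a photon-catch estimate for one ommatidium (assumed inputs):
# photon catch ~= radiance x facet area x acceptance solid angle x integration time.
import math

radiance = 1e20                          # photons m^-2 s^-1 sr^-1, white surface in bright daylight (assumed)
facet_diameter = 10e-6                   # m, typical facet size (from the text)
acceptance_angle = math.radians(1.5)     # rad, assumed ommatidial acceptance angle
integration_time = 0.1                   # s, image integration time (from the text)

facet_area = math.pi * (facet_diameter / 2) ** 2        # m^2
solid_angle = (math.pi / 4) * acceptance_angle ** 2     # sr, small-angle approximation

photons_per_second = radiance * facet_area * solid_angle
photons_per_image = photons_per_second * integration_time
print(f"~{photons_per_second:.1e} photons/s -> ~{photons_per_image:.0e} photons per 0.1 s image")
```

With these assumed numbers the catch comes out at a few hundred thousand photons per image, the same order of magnitude as the "about one million" figure quoted above.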

Humans are diurnal, and although they have a degree of night vision, human eyes are not particularly
good in twilight or Moonlight or starlight. Horses have adaptations that enable them to see better in
twilight than can humans, which is handy to spot predators working at odd hours!

Many insects are crepuscular (meaning that they are most active in twilight). Moths and beetles in
particular, but also some flies, some dragonflies and some butterflies fly at light levels comparable to
Moonlight. These insects may have apposition eyes with wider facets and they may collect light over a
longer time period (up to about 0.5 seconds?) before integrating the signal to produce the final image.
Moths and beetles, in particular, may have a different type of compound eye, called the superposition
eye . In this type of eye the iris cells only ensheath the top part of the ommatidium, around the facet
and cone. A translucent light-conducting rod connects the bottom of the crystalline cone to the
rhabdom which is now far beneath the cone. This is illustrated below:

Above: the rhabdom light detector can receive light from neighbouring ommatidia in a superposition eye.

In this way, light from as many as 30 ommatidia may overlap and focus onto the same point. Clearly this
intensifies the image, improving sensitivity in dim light. However, the trade-off is that visual acuity is
reduced - 30 or so ommatidia are now working as one large ommatidium, so the final image will be made
from 30 times fewer points and spatial resolution will be reduced. The human eye makes a similar
trade-off - in dim light the eye relies upon sensors that combine their signals neurologically. Some
insects do this too. What we have described so far is known as optical superposition, since the light
itself is added together or superposed (literally light is placed on top of light). However, some insects
have optical apposition eyes that superimpose their signals neurologically, so-called neural
superposition . In neural superposition, it is the electrical signals from neighbouring ommatidia that are
added together by the nervous system, even though the light illuminates separate ommatidia by
apposition.

The eyes of most insects are capable of adapting to light and dark. In diurnal insects with apposition
eyes, the pigment in the iris cells moves upward in the dark, exposing the rhabdom to light from
neighbouring ommatidia - effectively turning the eye from an optical apposition eye into an optical
superposition eye. Neural changes can further increase the sensitivity of dark-adapted insect vision.
Nocturnal insects show a similar pattern, but with greater ranges in sensitivity, with the eye becoming
about 1000 times more sensitive to light in the dark. Thus, though insects may have the geometry of
apposition or superposition type eyes, most can change in functionality to some degree. Clearly,
however, the range of light intensities which best suits each type of eye is restricted and matched to
the life habits of the species. Humans similarly show dark adaptation, which occurs quickly over the first
ten minutes, then slows and takes some 30 minutes to complete. When you first switch off the light in a
room at night, you will find that at first you cannot see anything much, but after a few moments objects
will become clearer. Wait for half an hour, or wake up in the middle of the night, and you will see more
clearly still. However, human eyes still work best in daylight and they are nowhere near as good in dim light as
those creatures that are most active in the dark. The graph below shows the increase in sensitivity of
the compound eye of the rove beetle Aleochara bilineata upon dark adaptation:

Above: Dark adaptation in the compound eye of Aleochara bilineata . This was measured using the
electroretinogram (ERG) technique, which uses electrodes to measure electrical activity in the insect
eye in response to pulses of light. The adaptation of this insect's eye is particularly rapid, being
complete after 10 to 15 minutes, but these insects are beetles, and many beetles are known to fly in dim
light; fully diurnal insects may require nearer to 30 minutes to dark adapt (as does the onion fly, Delia
antiqua, for example), rather like humans. However, fast-flying diurnal insects also possess eyes that
dark-adapt very rapidly, and Aleochara bilineata will take to the wing in direct sunlight. The electrical
response of the eye (and underlying nervous tissue) measured here is generally proportional to the log
of stimulus light intensity. Since the light stimulus has remained at constant brightness here the
increase is due to an increase in sensitivity of the eye by about 100-fold. (Data courtesy of Skilbeck, C
and Anderson, M).
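The "about 100-fold" figure follows from the logarithmic response relation described in the caption; the sketch below makes the step explicit (the 2 log-unit shift is used purely to illustrate the stated result).

```python
# Sketch of the reasoning in the caption: if the ERG response tracks the log of
# the *effective* light intensity and the stimulus itself stays constant, a
# response shift equivalent to n log10 units implies a 10**n change in sensitivity.
def sensitivity_gain(log_unit_shift: float) -> float:
    """Fold-change in sensitivity implied by a shift of `log_unit_shift` log10 units."""
    return 10 ** log_unit_shift

print(sensitivity_gain(2.0))   # a 2 log-unit shift -> 100-fold increase in sensitivity
```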

When you look at a conventional CRT (cathode-ray tube) television screen the image that you see
refreshes 25 or 30 times a second (depending on where you live) but the image looks continuous. (You
may detect some flicker out of the corner of your eye as the screen refreshes.) Many electric lights
also flicker on and off 100 or 120 times a second, but this is too fast for you
to notice (unless the light is old and the rate of flicker becomes much slower). For any visual stimulus
that blinks faster than the flicker-fusion frequency of your visual system, the flickers fuse into a single
continuous image and the flickering cannot be perceived. The flicker-fusion frequency of human vision
is 15-20 times a second, which is why you can just make out TV screens flickering. The electric light
flickers much too fast for you to see it flickering. However, the flicker-fusion frequency for a honeybee is
about 300, so the bee will see the light flickering. Thus, although the spatial visual acuity of the
honeybee visual system is only 1/100 to 1/60 that of the human eye, its temporal resolution is much
greater! This helps account for the very fast reflexes of many insects. The dragonfly can intercept a
flying prey insect on the wing because its vision responds much faster than a humans. Fast-flying
diurnal insects have very high flicker-fusion frequencies. Slow fliers, like the stick insect, Carausius ,
have flicker-fusion frequencies of about 40 per second.
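
To make the comparison concrete, here is a small illustrative Python sketch (mine, not from the source) that checks which observers would perceive a given lamp as flickering, using the approximate flicker-fusion frequencies quoted above.

```python
# Approximate flicker-fusion frequencies (Hz) quoted in the text above.
CFF_HZ = {
    "human (bright light)": 60,
    "honeybee": 300,
    "stick insect (Carausius)": 40,
}

def perceives_flicker(stimulus_hz, observer_cff_hz):
    """A stimulus looks steady once it flickers faster than the observer's CFF."""
    return stimulus_hz < observer_cff_hz

for observer, cff in CFF_HZ.items():
    for stimulus in (100, 120):  # mains-driven lamp flicker rates
        verdict = "sees flicker" if perceives_flicker(stimulus, cff) else "sees steady light"
        print(f"{observer}: {stimulus} Hz lamp -> {verdict}")
```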

Colour is what we perceive after our brains have processed visual information, and it represents the
wavelength of light coming from objects. This is an important point: what you see is what you perceive,
not simply what the eyes sense. Sensation is the purely physical phenomenon whereby a sensor
converts stimulus energy from the environment, such as light, into encoded electrical signals in the
nervous system. What you perceive is the result of nervous processing of these signals in the retina
and brain as they are presented to the conscious mind. We can never know what an insect perceives,
but we can ascertain how its sensors work and how the nervous system manipulates and modifies this
information. We can never know whether or not an insect perceives colour in the way that humans do.
What we can ascertain is whether or not they see and respond to colour.

Nobel Laureate Karl von Frisch conducted classic experiments with honeybees in 1914 and
demonstrated that honeybees do see colour. He trained bees to associate the colour blue with food. He
set out a checkerboard of paper squares, one blue and the others varying shades of grey, to show that
the bees did not simply recognise the blue square by seeing it as a particular shade of grey. He also
covered the papers with a glass plate to rule out any odours associated with the blue paper in
particular, in case the bees could smell that this paper was different. He placed an identical dish on
each square, but only the dish on the blue square contained food, so the dishes themselves offered no
visual cue to where the food was. The bees quickly learned that the blue square held the food. The
position of this square in the checkerboard was changed every 20 minutes, to prevent the bees from
simply remembering its position, yet they still flew straight to the blue square, no matter where it was.
Even when no food was provided, the bees would fly to the blue square first, expecting to find food
there (though they would soon learn that the food was gone). This demonstrated that bees have true
colour vision, and also that they are capable of learning. Furthermore, the bees could not be trained in
the same way to respond to a grey, black or white square, so they really do see the colour blue rather
than seeing it as a shade of grey. This is why so many insect-pollinated flowers are large and brightly
coloured: to advertise their presence to insects. Flowers provide pollen, and often nectar, as food for
the insect in return for pollen dispersal and delivery to recipient flowers.
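
The logic of the grey-card control can be illustrated with a small Python sketch (my own, not von Frisch's): if a bee judged only brightness, some shade of grey would match the blue card's luminance and be chosen just as often, so reliably picking the blue card implies genuine wavelength discrimination. The RGB values and the standard luminance weights below are illustrative assumptions.

```python
# Relative luminance of an RGB colour using the common Rec. 709 weights.
def luminance(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

blue_card = (0.2, 0.3, 0.9)                                 # an illustrative "blue" card
greys = [(v, v, v) for v in (0.1 * i for i in range(11))]   # shades of grey, 0.0 to 1.0

target = luminance(*blue_card)
# Find the grey whose brightness best matches the blue card.
closest_grey = min(greys, key=lambda g: abs(luminance(*g) - target))

print(f"blue card luminance  : {target:.3f}")
print(f"closest matching grey: {closest_grey[0]:.1f} "
      f"(luminance {luminance(*closest_grey):.3f})")
# A brightness-only observer could not reliably tell these two apart,
# so consistently choosing the blue card demonstrates colour vision.
```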

Colour vision is an advanced feature. Most mammals, including cats and dogs, have only limited colour
vision: they are dichromats, with just two cone types, so although they can tell some colours apart
(blues from yellows, for instance) they see far fewer hues than we do and cannot distinguish red from
green. If your dog recognises your red car, it is not recognising it by its redness (the red looks drab and
greyish-yellow to the dog) but by other features of the car. To a lion, a zebra is well camouflaged, since
its black and white stripes offer little colour contrast against its surroundings. Primates, including
humans, are the exception among mammals. Our primate ancestors probably evolved full colour vision
as an aid to finding ripe fruit in trees, since fruit was a staple part of their diet. Many birds and some fish
also have colour vision; indeed, their colour vision may exceed that of humans in the variety of colours
they can distinguish. For this reason, a zebra would never attract mates with a bright show of colours,
but a bird might (apart from which, birds can fly away from predators that spot their bright colours,
whilst a zebra cannot!). The colour vision of birds also explains why insects need authentic camouflage
colours to avoid being spotted by their avian predators; hence many insects are coloured in shades of
green, yellow or brown. Many insects instead advertise their bad taste, toxicity or stinging ability to
would-be predators with contrasting stripes, as in bees and wasps. Birds will see the stripes and their
colours, whilst most mammals will see mainly the contrast between the stripes.

The pigments in the retina, or in the insect eye, that detect light are called rhodopsins. These pigments
come in several types, but each has its own distinct colour and so each absorbs and responds best to
certain colours or wavelengths of light. How many such pigments do humans need to see the vast
number of hues that they can see? Perhaps surprisingly, the answer is only three! Humans have
sensors (cones) in the retina that respond best to blue light (around 440 nm, strictly blue-violet), to
green light (around 545 nm), or to long-wavelength 'red' light (in fact responding strongly to yellow,
orange and red); a fourth type of sensor, the rods, sees only shades of grey, black and white (it is
achromatic). Most real colours stimulate a mixture of the red, green and blue sensors. For example, a
sky blue might be about 3 parts red, 4 parts green and 5 parts blue: assuming that you are not
colour-blind, it stimulates the blue sensors in your retina most, the green sensors quite a bit, and the
red sensors least of all. Your retina and brain then blend the three signals to give the correct shade of
blue. The whole spectrum of hues that humans can see is generated by blending these three primary
colours: red, green and blue. For this reason humans have trichromatic colour vision (trichromatic
literally means 'three-colour'). People who are colour-blind, however, are usually dichromatic (their
colour vision is based on only two cone types), although a very few people are totally colour-blind and
see only in shades of grey. Most birds, goldfish and a few humans are tetrachromatic (their colour vision
is based on four primaries), and some birds may even be pentachromatic (five primaries), and so are
capable of seeing more hues than your average human.
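
As a toy illustration of this mixing, here is a short Python sketch (my own simplification, not from the text): it treats the '3 parts red, 4 parts green, 5 parts blue' example as relative drive to the three cone classes and normalises it, assuming for simplicity that each primary maps onto exactly one cone type, which real cone spectra only approximate.

```python
def cone_drive(red_parts, green_parts, blue_parts):
    """Normalise an RGB mixture into fractional stimulation of the three
    cone classes, under the simplifying (and not quite true) assumption
    that each primary drives exactly one cone type."""
    total = red_parts + green_parts + blue_parts
    return {
        "red cones": red_parts / total,
        "green cones": green_parts / total,
        "blue cones": blue_parts / total,
    }

# The "sky blue" example from the text: 3 parts red, 4 green, 5 blue.
for cone, fraction in cone_drive(3, 4, 5).items():
    print(f"{cone}: {fraction:.0%}")
# Blue cones are driven most (~42%), green next (~33%), red least (~25%),
# and the brain blends these relative signals into a single perceived hue.
```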

So, what about insects? The graph below shows an example of an insect visual spectrum, for the rove
beetle Aleochara bilineata . It tells us how sensitive the eye is to each wavelength of light:

Above: the visual spectrum of Aleochara bilineata (unpublished data courtesy of Skilbeck, C. and
Anderson, M.). This shows the electrical response of the compound eye (measured with electrodes) to
pulses of light of defined wavelength. Measuring the electrical response of an animal's eye to light in this
way is a technique called the electroretinogram (ERG). The greater the electrical response measured
from the ERG, the greater the sensitivity of the eye to the particular wavelength used. In this case we can
see peaks in sensitivity at around 365 nm (ultraviolet) and 545 nm (green). The dotted vertical line
indicates the cut-off for human vision: humans cannot see wavelengths to the left of this line, which are
ultraviolet (UV). Humans also cannot see beyond about 750 nm, which is infrared (IR). Notice that the
longer the wavelength, the redder the light, and the shorter the wavelength, the bluer the light. This insect
cannot see red light at all well, which is typical of many insects. It can, however, see ultraviolet light
clearly - so it can see some colours that humans cannot see, even though it cannot see red very well.
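
A spectral sensitivity curve with two peaks like this is often modelled, to a first approximation, as the sum of two pigment absorption curves. The Python sketch below is an illustration of that idea, not a fit to the actual Aleochara data: it uses simple Gaussian-shaped pigment templates centred on the UV and green peaks mentioned above, with made-up bandwidths and weights.

```python
import math

def gaussian_pigment(wavelength_nm, peak_nm, width_nm):
    """A crude Gaussian stand-in for a visual pigment's absorption curve."""
    return math.exp(-0.5 * ((wavelength_nm - peak_nm) / width_nm) ** 2)

def model_sensitivity(wavelength_nm):
    # Two hypothetical pigments: a UV receptor near 365 nm and a green
    # receptor near 545 nm, with illustrative bandwidths and weights.
    uv = 1.0 * gaussian_pigment(wavelength_nm, peak_nm=365, width_nm=30)
    green = 0.8 * gaussian_pigment(wavelength_nm, peak_nm=545, width_nm=50)
    return uv + green

for wl in range(300, 701, 50):
    bar = "#" * int(40 * model_sensitivity(wl))
    print(f"{wl} nm {bar}")
# The printout shows two humps (near 365 nm and 545 nm) and almost no
# response at the red end of the spectrum, echoing the measured curve.
```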

The peak in the ultraviolet helps insects to navigate. Scattered sunlight forms a pattern of polarised
ultraviolet light in the sky - a pattern that humans cannot see. This pattern indicates the position of the
Sun, even when it is cloudy, allowing insects to navigate by using the Sun as a compass, along with
their own internal biological clocks. The insect knows what time of day it is, and thus where the Sun
should be in the sky, and can use this to take a compass bearing, so it knows whether it is flying north,
east, south or west.
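
A time-compensated sun compass can be caricatured in a few lines of Python. The sketch below is a deliberately crude model of the idea described above, not of any real insect's mechanism: it assumes the Sun's azimuth sweeps at roughly 15 degrees per hour and sits due south at local noon (a northern-hemisphere simplification), then works out the angle to the Sun an insect must hold to keep a fixed compass heading.

```python
def sun_azimuth_deg(hour_of_day):
    """Very rough solar azimuth: due south (180 deg) at local noon,
    sweeping ~15 degrees per hour (a northern-hemisphere caricature)."""
    return (180 + 15 * (hour_of_day - 12)) % 360

def angle_to_sun_for_heading(hour_of_day, desired_heading_deg):
    """Angle (clockwise from the Sun) the insect must hold to keep a fixed
    compass heading, given the time read from its internal clock."""
    return (desired_heading_deg - sun_azimuth_deg(hour_of_day)) % 360

# To keep flying due east (90 deg) all day, the insect must steadily
# change the angle it holds relative to the Sun as the Sun moves:
for hour in (9, 12, 15):
    print(hour, angle_to_sun_for_heading(hour, desired_heading_deg=90))
# 9 -> 315.0, 12 -> 270.0, 15 -> 225.0: same heading, different Sun angles.
```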

How does an electroretinogram work?

An electroretinogram is recorded by measuring the voltage generated across the insect eye in response
to a pulse of light; the resulting trace can be viewed on an oscilloscope.

A similar spectrum measured for the onion fly, Delia antiqua, again shows a peak in the ultraviolet - a
very strong one. Onion flies fly more than Aleochara does and so probably need to use the solar compass
more often, and perhaps more precisely. Being a rove beetle, Aleochara spends a lot of its time on or
under the ground in burrows that it digs, folding its wings under its elytra for protection, and it will only fly
in direct sunlight. The onion fly also has good sensitivity in the green part of the spectrum, but its
sensitivity to blue light is greater. Again this may indicate a strong flier, and as this insect feeds on onion
plants, a high sensitivity to the blue-green of onion leaves perhaps helps it to spot them more easily
(though odours will also be very important).

So, these insects may be at least dichromatic (with UV and green sensors), and possibly trichromatic
(with UV, blue and green). However, these data alone are not enough to prove that they have colour
vision: although the eye has the necessary sensors, we do not know whether the brain interprets their
signals as differences in brightness or as colour. Behavioural experiments, similar to those done on
bees, might be able to answer that. Some moths have peaks in the UV, blue, green and also in the red,
and so may be tetrachromatic.

[Technical note: it is important to calibrate the apparatus so that the 'intensity' of the light is the same at every
wavelength; otherwise, if the blue light were brighter, for example, this would bias the result. One way to do this is to keep
the energy of the light beam constant at all wavelengths; this yields what is called the spectral efficiency of the insect.
However, eyes do not respond to light on the basis of the energy in the beam, but rather on the number of photons
present, or more precisely the photon flux density. Keeping the photon flux density constant across wavelengths
measures what is called the spectral sensitivity. Note that since the energy of a single photon is E = hc/wavelength
(h = Planck's constant, c = the speed of light), a beam at 650 nm of a given energy contains almost twice as many
photons as a beam of the same energy at 350 nm. However, because the eye responds to the log of stimulus intensity,
and because the eye in this case is not very sensitive to red light anyway, the spectral sensitivity curve differs little from
the spectral efficiency curve; the peaks are simply shifted slightly to the right. Spectral sensitivity is nevertheless the
preferred measure these days, as it is considered more accurate and representative (although one still has to consider
the quantum efficiency of the eye: spectral sensitivity measures the response to the number of incident quanta, not the
number of absorbed quanta).]
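
The 'almost twice as many photons' figure can be checked with a few lines of Python; this is a worked example of the equal-energy versus equal-photon-flux point above, not part of the original note.

```python
# Photons per joule for a monochromatic beam: N = 1 / (h * c / wavelength).
H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s

def photons_per_joule(wavelength_nm):
    photon_energy_joules = H * C / (wavelength_nm * 1e-9)
    return 1.0 / photon_energy_joules

red_650 = photons_per_joule(650)
uv_350 = photons_per_joule(350)
print(f"650 nm: {red_650:.3e} photons per joule")
print(f"350 nm: {uv_350:.3e} photons per joule")
print(f"ratio : {red_650 / uv_350:.2f}")   # ~1.86, i.e. almost double
```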

Many insect-pollinated flowers contain ultraviolet-reflecting pigments that only their pollinating insects
(and perhaps birds) can see. Many flowers are more strikingly patterned in the UV than in the visible
spectrum. Furthermore, markings visible only in the UV act as nectar guides, like taxiway markings,
guiding the landed insect to the pollen and nectar food rewards.

So, we have seen that the insect compound eye is designed very differently from the vertebrate eye. The
insect eye has much poorer spatial resolution, owing to its design constraints, but some insect eyes have
much higher temporal resolution. Like mammals, insects can adapt to see in low light levels at night, and,
like primates and birds, at least some of them can see colour. Most insects can also see ultraviolet light
(whether as a colour or as shades of grey), which helps them to navigate using the Sun. It has been said
that insect evolution hit a brick wall with the compound eye, unable to achieve better spatial resolution;
yet some insects have other types of eye that could, in principle, have evolved to become more like
vertebrate eyes. So why doesn't any insect have visual acuity as high as a human's? The answer
probably lies in neural processing: even if an insect had an eye as spatially acute as a human's, where
would it fit the large brain required to process such detailed images? In the end, insect eyes are highly
adapted to the insect way of life. Indeed, insects rival the vertebrates as dominant terrestrial life-forms on
Earth, so they are clearly highly evolved!

Here is a simple experiment on insect vision that you can perform at home. All you need is a wooden
cone, 6 to 8 inches in length, to act as a mould (the handle of a paintbrush of appropriate size works
well), plus black paper, glue and tissue paper. Make a model of the insect eye by wrapping pieces of the
black paper around the mould to form cones, gluing the overlapping edges together, and cutting half an
inch from the tip of each cone. Make about 20 such paper cones and then pack (and glue) them
together, with the wide ends directed outwards and the narrow cut ends forming part of the surface of a
sphere. A piece of tissue paper held against the narrow tips serves as the retina. Take the model into a
dark room with the wide ends directed toward a distant lamp, such that only one to three of the cones
are illuminated; the rest will remain dark so long as the tissue paper is firmly pressed against their
narrow openings. This is an apposition eye, in which each cone, or model ommatidium, is optically
isolated and an accurate image is produced. Next, move the tissue paper away from the cones and the
image will become less distinct as light is scattered to neighbouring ommatidia; this is the superposition
eye. (Idea taken from: Simple Experiments with Insects, by H. Kalmus, Heinemann.)


Discussion

A small desert-living parrot with relatively low FFF

Our results show that the highest flicker fusion frequency, the CFF, of budgerigars occurs at much higher luminances than previously assumed—at least at 3500 or 7200 cd/m², possibly even higher. This is brighter than for any other tested bird species, but follows our expectations, since wild budgerigars live in extremely bright, open habitats in the Australian desert and hence should be adapted to high light intensities. Just like passerines, they have a cone-dominated retina, with 2.1 times as many cones as rods (Lind and Kelber 2009). However, the highest FFFs in our experiments, in the range between 77 and 93 Hz, are much lower than the CFFs of some fast-flying insects and also of birds such as blue tits and Old World flycatchers (Boström et al. 2016), and are on a par with results from the domestic chicken (Lisney et al. 2011). Since budgerigars are much closer in size and flight behavior to the passerines than to chickens, our result suggests that small size and airborne agility per se do not lead to very high temporal visual resolution. Furthermore, the fact that domestic chickens are descended from red jungle fowl, which live in the dim undergrowth of tropical forests, does not support bright habitats as an explanation for the extreme CFFs found in blue tits and flycatchers.

Our results do, however, support the idea that extreme temporal visual acuity may be a synapomorphic trait for passerines, as there is no conclusive evidence for CFFs in passerines being even nearly as low as in the psittaciform budgerigar. Crozier and Wolf (1941, 1944) reported CFFs of 55 Hz in two passerines, the zebra finch (Taeniopygia guttata) and the house sparrow (Passer domesticus), but because their experimental design recorded optomotor responses, which are limited by both spatial and temporal resolution, the true CFFs may have been underestimated.

It is also possible that lifestyles requiring accurate tracking of rapid motion are driving the evolution of temporal acuity in birds, as has been shown for insects (e.g., Autrum 1949; Autrum and Stoecker 1950; Laughlin and Weckström 1993; Weckström and Laughlin 1995). Budgerigars have different feeding habits from the passerine species tested by Boström et al. (2016). Both pied and collared flycatchers have a diet dominated by insects, while insects form a smaller but significant part of the diet of blue tits (del Hoyo et al. 2006, 2007). Catching flying insects on the wing should exert a high selection pressure on temporal visual acuity and is likely to have pushed up the CFFs of these species, especially the flycatchers. Budgerigars and chickens, on the other hand, have diets dominated by seeds or slow-moving insects, putting less pressure on temporal visual acuity. Another ecological difference between the blue tits/flycatchers and the budgerigars/chickens is their habitat. Blue tits and flycatchers lead airborne lives in forests, which constitute quite complex environments and may also demand high temporal acuity if the birds are to move and manoeuvre fast through the canopy. Red jungle fowl also live in forests, but they move more slowly and mostly walk on the ground, a behavior that might not require such high temporal acuity. Budgerigars live in open habitats, face less risk of colliding with branches and trees, and hence may also perform well with slower vision.

Our experimental design was rather similar to those used for chickens (e.g., Lisney et al. 2011) and passerines (Boström et al. 2016), making our results comparable to the CFFs behaviorally determined in those other bird species. It might, however, be problematic to draw ecological conclusions from experiments on domesticated budgerigars and chickens, since there are indications that domestication may have had some detrimental effect on the visual system of domestic birds (Lisney et al. 2011; Roth and Lind 2013). Since our test animals had not been caught in the wild, we cannot control for the possibility that a loss of visual acuity has occurred in budgerigars during domestication and artificial selection for different color varieties (but see Jeffery and Williams 1994).

Comparison to Ginsburg and Nilsson (1971)

Our results on FFF in budgerigars at high light intensities appear somewhat low compared to what Ginsburg and Nilsson (1971) found for lower intensities (Fig. 2b). One difference between our study and theirs is that their light stimuli did not include UV light. A study on domestic chickens by Rubene et al. (2010) found that excluding UV light from the stimuli resulted in significantly lower FFF values than when the stimuli contained full-spectrum light. On the other hand, judging by visual examination of the graph in Fig. 2b, the FFFs measured by Ginsburg and Nilsson at lower luminances are not lower than expected from our data; if anything, they are higher than extrapolation of our curve would suggest.

Other differences between the two studies lie in the training and testing regimes. The two budgerigars tested by Ginsburg and Nilsson (1971) were closer to the stimulus, and they were not presented with a choice between two stimuli but were trained to peck at a key if the presented light appeared constant. Shortening the distance to the stimulus increases the size of its image on the retina, which in humans is known to increase FFF, according to the Granit-Harper law (Granit and Harper 1930). Ginsburg and Nilsson (1971) also started their experiment at high frequencies and decreased the frequency until the bird ceased pecking at the key, whereas our experiment started at lower frequencies, which were increased until the bird failed to discriminate between the two stimuli. Finally, Ginsburg and Nilsson (1971) likely used higher ambient light levels, relative to the stimulus luminance, than we did. As we found a difference in FFF between different ambient light levels in the test at 1500 cd/m², this may also have influenced the results. However, we consider the results at high luminance to be the most relevant for a desert bird, which is only active in daytime, in very bright light.
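
The Granit-Harper relation mentioned above states that, in humans, CFF rises roughly linearly with the logarithm of the stimulated retinal area. The short Python sketch below is only a toy illustration of that relation: the slope and intercept are placeholders, not the values from Granit and Harper (1930), and the point is simply that each fixed multiplication of image area adds a fixed increment to the predicted CFF.

```python
import math

def cff_granit_harper(retinal_area_deg2, slope=10.0, intercept=20.0):
    """Toy Granit-Harper relation: CFF = slope * log10(area) + intercept.
    The slope and intercept here are illustrative, not fitted values."""
    return slope * math.log10(retinal_area_deg2) + intercept

# Halving the viewing distance doubles the stimulus's angular size and so
# roughly quadruples its retinal image area, adding slope*log10(4) Hz.
for area in (1.0, 4.0, 16.0):   # square degrees, illustrative values
    print(f"area {area:>4} deg^2 -> predicted CFF {cff_granit_harper(area):.1f} Hz")
# Each 4-fold increase in area adds about 6 Hz with these placeholder numbers.
```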

Are pet budgerigars seeing the flicker of lamps?

Do budgerigars see the flicker of fluorescent tubes or LED lamps in homes or in pet shops, and does this illumination stress the birds? We aimed to investigate whether the welfare of captive budgerigars may be impaired by flickering light, which should appear to flicker to the birds if their FFFs exceed 100–120 Hz. None of the budgerigars in our experiment had FFFs above 100 Hz at any of the tested light intensities, so it is unlikely that they would suffer under fluorescent lights. The European standard for workspace illumination (EN 12464-2:2007) requires a luminance of 500 cd/m² at desks and 100 cd/m² in the general work space. Homes illuminated by fluorescent tubes in bright living rooms may be twice as bright. We also measured the luminance in an office, a rather bright environment lit by fluorescent lamps: straight under the lamp the luminance was approximately 1000 cd/m², and it dropped quite quickly with increasing distance from the lamp. At 750 cd/m², budgerigars and humans did not differ very much in FFF (Fig. 2b), suggesting that even budgerigars exposed to worn fluorescent lamps with flicker frequencies below 100 Hz should detect the flicker no more readily than their human caretakers, minimizing the risk of impaired welfare for domestic budgerigars.

Even if budgerigars are unlikely to perceive 100 Hz flicker from artificial lighting, it may still cause distress if the retina responds to it. Humans, who normally do not perceive 100 Hz flicker consciously, can still suffer from exposure to it: it can cause headaches, eyestrain, anxiety and changes in eye saccades (Wilkins et al. 1989), disturb the perception of rapid continuous motion (Maddocks et al. 2001), and affect the brain (e.g., Kuller and Laike 1998) or the immune system (Martin 1989). Hence, it is important to study flicker sensitivity in tame birds not only at the cognitive level but also at the retinal level, using the ERG.

Our study indicates that high temporal resolution is probably not a trait common to all small, active birds, since budgerigars and domestic chickens seem to fall within the same range, whereas the vision of the studied passerine species has higher temporal resolution. We consider it more likely that very high temporal resolution of vision is a synapomorphic trait for passerines, or an adaptive trait connected to airborne insectivory or to a lifestyle of fast flight in complex environments, in a similar way to what has been shown for fast-flying insects (see above). Clearly, more bird species need to be studied to resolve this question.


Small birds' vision: Not so sharp but superfast

One may expect a creature that darts around its habitat to be capable of perceiving rapid changes as well. Yet birds are famed more for their good visual acuity. Joint research by Uppsala University, Stockholm University and the Swedish University of Agricultural Sciences (SLU) now shows that, in small passerines (perching birds) in the wild, vision is considerably faster than in any other vertebrate—and more than twice as fast as ours.

The new research findings are published today in PLOS ONE.

In behavioural experiments, the scientists have studied the ability to resolve visual detail in time in three small wild passerine species: blue tit, collared flycatcher and pied flycatcher. This ability is the temporal resolution of eyesight, i.e. the number of changes per second an animal is capable of perceiving. It may be compared to spatial resolution (visual acuity), a measure of the number of details per degree in the field of vision.

The researchers trained wild-caught birds to receive a food reward by distinguishing between a pair of lamps, one flickering and one shining a constant light. Temporal resolution was then determined by increasing the flicker rate to a threshold at which the birds could no longer tell the lamps apart. This threshold, known as the CFF (critical flicker fusion rate), averaged between 129 and 137 hertz (Hz). In the pied flycatchers it reached as high as 146 Hz, some 50 Hz above the highest rate encountered for any other vertebrate. For humans, the CFF is usually approximately 60 Hz. For these passerines, the world might be said to be in slow motion compared with how it looks to us.
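
The general procedure described above can be caricatured in a few lines of Python. The sketch below is my own illustration of an ascending threshold-finding procedure for a two-lamp discrimination task, not the authors' actual protocol; the step size, trial counts, criterion and the assumption that performance drops abruptly to chance above the CFF are all simplifications.

```python
import random

def simulate_trial(flicker_hz, bird_cff_hz):
    """One two-lamp trial: the simulated bird picks the flickering lamp
    correctly whenever the flicker is below its CFF; above it, it guesses."""
    return True if flicker_hz < bird_cff_hz else random.random() < 0.5

def ascending_threshold(bird_cff_hz, start_hz=60, step_hz=5,
                        trials_per_level=20, criterion=0.75):
    """Raise the flicker rate until choice accuracy drops to chance and
    report the last rate at which the bird still met the criterion."""
    rate = start_hz
    last_passed = None
    while rate < 300:
        correct = sum(simulate_trial(rate, bird_cff_hz)
                      for _ in range(trials_per_level))
        if correct / trials_per_level >= criterion:
            last_passed = rate
            rate += step_hz
        else:
            break
    return last_passed

random.seed(1)
# Usually reports the last level below the simulated bird's CFF (here ~145 Hz).
print(ascending_threshold(bird_cff_hz=146))
```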

The video clip visualizes one advantage of the ultra-rapid vision discovered in birds. The roughly two-and-a-half times faster refresh rate of visual input in a pied flycatcher compared with a human makes it far easier to track and predict the flight paths of two bluebottle flies. This is most likely a crucial ability for a bird that catches its airborne prey on the wing. Credit: Malin Thyselius

It has been argued before, but never investigated, that small and agile wild birds should have extremely fast vision. Even so, the blue tits and flycatchers proved to have higher CFF rates than were predicted from their size and metabolic rates. This indicates an evolutionary history of natural selection for fast vision in these species. The explanation might lie in small airborne birds' need to detect and track objects whose image moves very swiftly across the retina—for blue tits, for example, to be able to see and avoid all the branches when they take cover from predators by flying straight into bushes. Moreover, the three avian species investigated all, to a varying degree, subsist on the insects they catch. Flycatchers, as their name suggests, catch airborne insects. For this, aiming straight at the insect is not enough. Forward planning is required: the bird needs high temporal resolution to track the insect's movement and predict its location the next instant.

The new results give some cause for concern about captive birds' welfare. Small passerines are commonly kept in cages, and may be capable of seeing roughly as fast as their wild relatives. With the phase-out of incandescent light bulbs for reasons of energy efficiency, tame birds are increasingly often kept in rooms lit with low-energy light bulbs, fluorescent lamps or LED lighting. Many of these flicker at 100 Hz, which is thus invisible to humans but perhaps not to small birds in captivity. Studies have shown that flickering light can cause stress, behavioural disturbances and various forms of discomfort in humans and birds alike.

Of all the world's animals, the eagle has the sharpest vision. It can discern 143 lines within one degree of the field of vision, while a human with excellent sight manages about 60. The magnitude of this difference is almost exactly the same as between a human's top vision speed and a pied flycatcher's: 60 and 146 Hz respectively. Thus, the flycatcher's vision is faster than human vision to roughly the same extent as an eagle's vision is sharper. So small passerines' rapid vision is an evolutionary adaptation just as impressive as the sharp eyesight of birds of prey.

Anders Ödeen, the lecturer at Uppsala University's Department of Ecology and Genetics who headed the study, puts the research findings in perspective.

'Fast vision may, in fact, be a more typical feature of birds in general than visual acuity. Only birds of prey seem to have the ability to see in extremely sharp focus, while human visual acuity outshines that of all other bird species studied. On the other hand, there are lots of bird species similar to the blue tit, collared flycatcher and pied flycatcher, both ecologically and physiologically, so they probably also share the faculty of superfast vision.'


Massive 'Darth Vader' isopod found lurking in the Indian Ocean

The father of all giant sea bugs was recently discovered off the coast of Java.

A close-up of Bathynomus raksasa

  • A new species of isopod with a resemblance to a certain Sith lord was just discovered.
  • It is the first known giant isopod from the Indian Ocean.
  • The finding extends the list of giant isopods even further.

Humanity knows surprisingly little about the ocean depths. An often-repeated bit of evidence for this is the fact that humanity has done a better job mapping the surface of Mars than the bottom of the sea. The creatures we find lurking in the watery abyss often surprise even the most dedicated researchers with their unique features and bizarre behavior.

A recent expedition off the coast of Java discovered a new isopod species remarkable for its size and resemblance to Darth Vader.

The ocean depths are home to many creatures that some consider to be unnatural.

According to LiveScience, the Bathynomus genus is sometimes referred to as "Darth Vader of the Seas" because the crustaceans are shaped like the character's menacing helmet. Deemed Bathynomus raksasa ("raksasa" meaning "giant" in Indonesian), this cockroach-like creature can grow to over 30 cm (12 inches). It is one of several known species of giant ocean-going isopod. Like the other members of its order, it has compound eyes, seven body segments, two pairs of antennae, and four sets of jaws.

The incredible size of this species is likely a result of deep-sea gigantism. This is the tendency for creatures that inhabit deeper parts of the ocean to be much larger than closely related species that live in shallower waters. B. raksasa appears to make its home between 950 and 1,260 meters (3,117 and 4,134 ft) below sea level.

Perhaps fittingly for so creepy-looking a creature, that puts it in the lower reaches of what is commonly called the twilight zone, named for the scant light available at such depths.

It is far from the only giant isopod: other species of ocean-going isopod can reach 50 cm (20 inches) in length and also look as if they came out of a nightmare. These are the unusual ones, though; most isopods stay at much more reasonable sizes.

The discovery of this new species was published in ZooKeys. The remainder of the specimens from the trip are still being analyzed. The full report will be published shortly.

What benefit does this find have for science? And is it as evil as it looks?

The discovery of a new species is always a cause for celebration in zoology. That this is the discovery of an animal that inhabits the deeps of the sea, one of the least explored areas humans can get to, is the icing on the cake.

Helen Wong of the National University of Singapore, who co-authored the species' description, explained the importance of the discovery:

"The identification of this new species is an indication of just how little we know about the oceans. There is certainly more for us to explore in terms of biodiversity in the deep sea of our region."

The animal's visual similarity to Darth Vader is a result of its compound eyes and the curious shape of its head. However, given the location of its discovery, the bottom of the remote seas, it may be associated with all manner of horrifically evil Elder Things and Great Old Ones.



