Television

    Television is a telecommunication medium for transmitting moving images and sound. Additionally, the term can refer to a physical television set rather than the medium of transmission. Television is a mass medium for advertising, entertainment, news, and sports. The medium is capable of more than “radio broadcasting,” which refers to an audio signal sent to radio receivers.

    Television became available in crude experimental forms in the 1920s, but only after several years of further development was the new technology marketed to consumers. After World War II, an improved form of black-and-white television broadcasting became popular in the United Kingdom and the United States, and television sets became commonplace in homes, businesses, and institutions. During the 1950s, television was the primary medium for influencing public opinion.[1] In the mid-1960s, color broadcasting was introduced in the U.S. and most other developed countries.

    The availability of various types of archival storage media such as Betamax and VHS tapes, LaserDiscs, high-capacity hard disk drives, CDs, DVDs, flash drives, high-definition HD DVDs and Blu-ray Discs, and cloud digital video recorders has enabled viewers to watch pre-recorded material—such as movies—at home on their own time schedule. For many reasons, especially the convenience of remote retrieval, the storage of television and video programming now also occurs on the cloud (such as the video-on-demand service offered by Netflix). At the beginning of the 2010s, digital television transmissions greatly increased in popularity. Another development was the move from standard-definition television (SDTV) (576i and 480i, with 576 or 480 interlaced lines of resolution) to high-definition television (HDTV), which provides a substantially higher resolution. HDTV may be transmitted in different formats: 1080p, 1080i, and 720p. Since 2010, with the invention of smart television, Internet television has increased the availability of television programs and movies via the Internet through streaming video services such as Netflix, Amazon Prime Video, iPlayer, and Hulu.

    In 2013, 79% of the world’s households owned a television set.[2] The replacement of earlier cathode-ray tube (CRT) screen displays with compact, energy-efficient, flat-panel alternative technologies such as LCDs (both fluorescent-backlit and LED), OLED displays, and plasma displays was a hardware revolution that began with computer monitors in the late 1990s. Most television sets sold in the 2000s were still CRTs; it was only in the early 2010s that flat-screen TVs decisively overtook CRTs.[3] Major manufacturers announced the discontinuation of CRT, Digital Light Processing (DLP), plasma, and even fluorescent-backlit LCDs by the mid-2010s.[4][5] LEDs are being gradually replaced by OLEDs.[6] Major manufacturers also began increasingly producing smart TVs in the mid-2010s.[7][8][9] Smart TVs with integrated Internet and Web 2.0 functions became the dominant form of television by the late 2010s.[10][better source needed]

    Television signals were initially distributed only as terrestrial television using high-powered radio-frequency television transmitters to broadcast the signal to individual television receivers. Alternatively, television signals are distributed by coaxial cable or optical fiber, satellite systems, and, since the 2000s, via the Internet. Until the early 2000s, these were transmitted as analog signals, but a transition to digital television was expected to be completed worldwide by the late 2010s. A standard television set consists of multiple internal electronic circuits, including a tuner for receiving and decoding broadcast signals. A visual display device that lacks a tuner is correctly called a video monitor rather than a television.

    Television broadcasts are mainly simplex, meaning that the transmitter cannot receive signals and the receiver cannot transmit them.

    Etymology

    The word television comes from Ancient Greek τῆλε (tele) ‘far’ and Latin visio ‘sight’. The first documented usage of the term dates back to 1900, when the Russian scientist Constantin Perskyi used it in a paper that he presented in French at the first International Congress of Electricity, which ran from 18 to 25 August 1900 during the International World Fair in Paris.

    The anglicized version of the term is first attested in 1907, when it was still “…a theoretical system to transmit moving images over telegraph or telephone wires“.[11] It was “…formed in English or borrowed from French télévision.”[11] In the 19th century and early 20th century, other “…proposals for the name of a then-hypothetical technology for sending pictures over distance were telephote (1880) and televista (1904).”[11]

    The abbreviation TV is from 1948. The use of the term to mean “a television set” dates from 1941.[11] The use of the term to mean “television as a medium” dates from 1927.[11]

    The term telly is more common in the UK. The slang term “the tube” or the “boob tube” derives from the bulky cathode-ray tube used on most TVs until the advent of flat-screen TVs. Another slang term for the TV is “idiot box.”[12]

    History

    Main article: History of television

    Mechanical

    Main article: Mechanical television

    The Nipkow disk. This schematic shows the circular paths traced by the holes, which may also be square for greater precision. The area of the disk outlined in black shows the region scanned.

    Facsimile transmission systems for still photographs pioneered methods of mechanical scanning of images in the 19th century. Alexander Bain introduced the facsimile machine between 1843 and 1846. Frederick Bakewell demonstrated a working laboratory version in 1851.[citation needed] Willoughby Smith discovered the photoconductivity of the element selenium in 1873. As a 23-year-old German university student, Paul Julius Gottlieb Nipkow proposed and patented the Nipkow disk in 1884 in Berlin.[13] This was a spinning disk with a spiral pattern of holes, so each hole scanned a line of the image. Although he never built a working model of the system, variations of Nipkow’s spinning-disk “image rasterizer” became exceedingly common.[14] Constantin Perskyi had coined the word television in a paper read to the International Electricity Congress at the International World Fair in Paris on 24 August 1900. Perskyi’s paper reviewed the existing electromechanical technologies, mentioning the work of Nipkow and others.[15] However, it was not until 1907 that developments in amplification tube technology by Lee de Forest and Arthur Korn, among others, made the design practical.[16]

    The first demonstration of the live transmission of images was by Georges Rignoux and A. Fournier in Paris in 1909. A matrix of 64 selenium cells, individually wired to a mechanical commutator, served as an electronic retina. In the receiver, a type of Kerr cell modulated the light, and a series of differently angled mirrors attached to the edge of a rotating disc scanned the modulated beam onto the display screen. A separate circuit regulated synchronization. The 8×8 pixel resolution in this proof-of-concept demonstration was just sufficient to clearly transmit individual letters of the alphabet. An updated image was transmitted “several times” each second.[17]

    In 1911, Boris Rosing and his student Vladimir Zworykin created a system that used a mechanical mirror-drum scanner to transmit, in Zworykin’s words, “very crude images” over wires to the “Braun tube” (cathode-ray tube or “CRT”) in the receiver. Moving images were not possible because, in the scanner: “the sensitivity was not enough and the selenium cell was very laggy”.[18]

    In 1921, Édouard Belin sent the first image via radio waves with his belinograph.[19]

    Baird in 1925 with his televisor equipment and dummies “James” and “Stooky Bill” (right)

    By the 1920s, when amplification made television practical, Scottish inventor John Logie Baird employed the Nipkow disk in his prototype video systems. On 25 March 1925, Baird gave the first public demonstration of televised silhouette images in motion at Selfridges department store in London.[20] Since human faces had inadequate contrast to show up on his primitive system, he televised a ventriloquist’s dummy named “Stooky Bill,” whose painted face had higher contrast, talking and moving. By 26 January 1926, he had demonstrated before members of the Royal Institution the transmission of an image of a face in motion by radio. This is widely regarded as the world’s first true public television demonstration, exhibiting light, shade, and detail.[21] Baird’s system used the Nipkow disk for both scanning the image and displaying it. A brightly illuminated subject was placed in front of a spinning Nipkow disk set with lenses that swept images across a static photocell. The thallium sulfide (thalofide) cell, developed by Theodore Case in the U.S., detected the light reflected from the subject and converted it into a proportional electrical signal. This was transmitted by AM radio waves to a receiver unit, where the video signal was applied to a neon light behind a second Nipkow disk rotating in synchronization with the first. The brightness of the neon lamp was varied in proportion to the brightness of each spot on the image. As each hole in the disk passed by, one scan line of the image was reproduced. Baird’s disk had 30 holes, producing an image with only 30 scan lines, just enough to recognize a human face.[22] In 1927, Baird transmitted a signal over 438 miles (705 km) of telephone line between London and Glasgow.[23] Baird’s original ‘televisor’ now resides in the Science Museum, South Kensington.
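
    The scan-and-display principle described above lends itself to a short illustration. The Python sketch below is only a toy model under stated assumptions: the 30×30 sample image, the sample counts, and the function names are invented for the example, and a real disk produced a continuous analog signal rather than discrete samples. It serializes a small brightness image line by line, as a spinning 30-hole disk would, and then rebuilds it the way a synchronized display disk and neon lamp did.

        # Illustrative sketch of Nipkow-style mechanical scanning (not Baird's actual apparatus):
        # each "hole" sweep reads one line of brightness values into a serial signal,
        # and a synchronized display reproduces the lines in the same order.

        SCAN_LINES = 30          # Baird's disk had 30 holes -> 30 scan lines
        SAMPLES_PER_LINE = 30    # arbitrary horizontal resolution for this example

        def scan(image):
            """Flatten a 2D brightness image (rows of 0.0-1.0 values) into a serial signal."""
            signal = []
            for line in image:            # one revolution of the disk scans every line once
                signal.extend(line)       # one hole sweep -> one line of samples
            return signal

        def display(signal):
            """Rebuild the image from the serial signal, as the receiving disk and lamp did."""
            rows = []
            for i in range(SCAN_LINES):
                start = i * SAMPLES_PER_LINE
                rows.append(signal[start:start + SAMPLES_PER_LINE])
            return rows

        # A toy "subject": a bright diagonal stripe on a dark background.
        image = [[1.0 if abs(x - y) < 2 else 0.1 for x in range(SAMPLES_PER_LINE)]
                 for y in range(SCAN_LINES)]

        reconstructed = display(scan(image))
        assert reconstructed == image     # lossless only because scanner and display stay in sync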

    In 1928, Baird’s company (Baird Television Development Company/Cinema Television) broadcast the first transatlantic television signal between London and New York and the first shore-to-ship transmission. In 1929, he became involved in the first experimental mechanical television service in Germany. In November of the same year, Baird and Bernard Natan of Pathé established France’s first television company, Télévision-Baird-Natan. In 1931, he made the first outdoor remote broadcast of The Derby.[24] In 1932, he demonstrated ultra-short wave television. Baird’s mechanical system reached a peak of 240 lines of resolution on BBC telecasts in 1936, though the mechanical system did not scan the televised scene directly. Instead, a 17.5 mm film was shot, rapidly developed, and then scanned while the film was still wet.[citation needed]

    A U.S. inventor, Charles Francis Jenkins, also pioneered the television. He published an article on “Motion Pictures by Wireless” in 1913, transmitted moving silhouette images for witnesses in December 1923, and on 13 June 1925, publicly demonstrated synchronized transmission of silhouette pictures. In 1925, Jenkins used the Nipkow disk and transmitted the silhouette image of a toy windmill in motion over a distance of 5 miles (8 km), from a naval radio station in Maryland to his laboratory in Washington, D.C., using a lensed disk scanner with a 48-line resolution.[25][26] He was granted U.S. Patent No. 1,544,156 (Transmitting Pictures over Wireless) on 30 June 1925 (filed 13 March 1922).[27]

    Herbert E. Ives and Frank Gray of Bell Telephone Laboratories gave a dramatic demonstration of mechanical television on 7 April 1927. Their reflected-light television system included both small and large viewing screens. The small receiver had a 2-inch-wide by 2.5-inch-high screen (5 by 6 cm). The large receiver had a screen 24 inches wide by 30 inches high (60 by 75 cm). Both sets could reproduce reasonably accurate, monochromatic, moving images. Along with the pictures, the sets received synchronized sound. The system transmitted images over two paths: first, a copper wire link from Washington to New York City, then a radio link from Whippany, New Jersey. Comparing the two transmission methods, viewers noted no difference in quality. Subjects of the telecast included Secretary of Commerce Herbert Hoover. A flying-spot scanner beam illuminated these subjects. The scanner that produced the beam had a 50-aperture disk. The disc revolved at a rate of 18 frames per second, capturing one frame about every 56 milliseconds. (Today’s systems typically transmit 30 or 60 frames per second, or one frame every 33.3 or 16.7 milliseconds, respectively.) Television historian Albert Abramson underscored the significance of the Bell Labs demonstration: “It was, in fact, the best demonstration of a mechanical television system ever made to this time. It would be several years before any other system could even begin to compare with it in picture quality.”[28]
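
    The frame periods quoted above follow directly from the frame rates; a quick illustrative check in Python (the helper name is invented for the example):

        # Frame period is the reciprocal of the frame rate.
        def frame_period_ms(frames_per_second):
            return 1000.0 / frames_per_second

        print(frame_period_ms(18))   # ~55.6 ms, as in the 1927 Bell Labs demonstration
        print(frame_period_ms(30))   # ~33.3 ms
        print(frame_period_ms(60))   # ~16.7 ms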

    In 1928, WRGB, then W2XB, was started as the world’s first television station. It broadcast from the General Electric facility in Schenectady, NY. It was popularly known as “WGY Television.” Meanwhile, in the Soviet Union, Leon Theremin had been developing a mirror drum-based television, starting with a 16-line resolution in 1925, then 32 lines, and eventually 64 using interlacing in 1926. As part of his thesis, on 7 May 1926, he electrically transmitted and then projected near-simultaneous moving images on a 5-square-foot (0.46 m2) screen.[26]

    By 1927 Theremin had achieved an image of 100 lines, a resolution that was not surpassed until May 1932 by RCA, with 120 lines.[29]

    On 25 December 1926, Kenjiro Takayanagi demonstrated a television system with a 40-line resolution that employed a Nipkow disk scanner and CRT display at Hamamatsu Industrial High School in Japan. This prototype is still on display at the Takayanagi Memorial Museum at Shizuoka University, Hamamatsu Campus. His research toward creating a production model was halted by SCAP (the Supreme Commander for the Allied Powers) after World War II.[30]

    Because only a limited number of holes could be made in the disks, and disks beyond a certain diameter became impractical, image resolution on mechanical television broadcasts was relatively low, ranging from about 30 lines up to 120 or so. Nevertheless, the image quality of 30-line transmissions steadily improved with technical advances, and by 1933 the UK broadcasts using the Baird system were remarkably clear.[31] A few systems ranging into the 200-line region also went on the air. Two of these were the 180-line system that Compagnie des Compteurs (CDC) installed in Paris in 1935 and the 180-line system that Peck Television Corp. started in 1935 at station VE9AK in Montreal.[32][33] The advancement of all-electronic television (including image dissectors and other camera tubes and cathode-ray tubes for the reproducer) marked the beginning of the end for mechanical systems as the dominant form of television. Mechanical television, despite its inferior image quality and generally smaller picture, would remain the primary television technology until the 1930s. The last mechanical telecasts ended in 1939 at stations run by a number of public universities in the United States.

    Electronic

    Further information: Video camera tube

    Ferdinand Braun

    In 1897, English physicist J. J. Thomson was able, in his three well-known experiments, to deflect cathode rays, a fundamental function of the modern cathode-ray tube (CRT). The earliest version of the CRT was invented by the German physicist Ferdinand Braun in 1897 and is also known as the “Braun” tube.[34] It was a cold-cathode diode, a modification of the Crookes tube, with a phosphor-coated screen. Braun was the first to conceive the use of a CRT as a display device.[35] The Braun tube became the foundation of 20th century television.[36] In 1906 the Germans Max Dieckmann and Gustav Glage produced raster images for the first time in a CRT.[37] In 1907, Russian scientist Boris Rosing used a CRT in the receiving end of an experimental video signal to form a picture. He managed to display simple geometric shapes onto the screen.[38]

    In 1908, Alan Archibald Campbell-Swinton, a fellow of the Royal Society (UK), published a letter in the scientific journal Nature in which he described how “distant electric vision” could be achieved by using a cathode-ray tube, or Braun tube, as both a transmitting and receiving device.[39][40] He expanded on his vision in a speech given in London in 1911 and reported in The Times[41] and the Journal of the Röntgen Society.[42][43] In a letter to Nature published in October 1926, Campbell-Swinton also announced the results of some “not very successful experiments” he had conducted with G. M. Minchin and J. C. M. Stanton. They had attempted to generate an electrical signal by projecting an image onto a selenium-coated metal plate that was simultaneously scanned by a cathode ray beam.[44][45] These experiments were conducted before March 1914, when Minchin died,[46] but they were later repeated by two different teams in 1937, by H. Miller and J. W. Strange from EMI,[47] and by H. Iams and A. Rose from RCA.[48] Both teams successfully transmitted “very faint” images with Campbell-Swinton’s original selenium-coated plate. Although others had experimented with using a cathode-ray tube as a receiver, the concept of using one as a transmitter was novel.[49] The first cathode-ray tube to use a hot cathode was developed by John B. Johnson (who gave his name to the term Johnson noise) and Harry Weiner Weinhart of Western Electric, and became a commercial product in 1922.[citation needed]

    In 1926, Hungarian engineer Kálmán Tihanyi designed a television system using fully electronic scanning and display elements and employing the principle of “charge storage” within the scanning (or “camera”) tube.[50][51][52][53] The problem of low sensitivity to light resulting in low electrical output from transmitting or “camera” tubes would be solved with the introduction of charge-storage technology by Kálmán Tihanyi beginning in 1924.[54] His solution was a camera tube that accumulated and stored electrical charges (“photoelectrons”) within the tube throughout each scanning cycle. The device was first described in a patent application he filed in Hungary in March 1926 for a television system he called “Radioskop”.[55] After further refinements included in a 1928 patent application,[54] Tihanyi’s patent was declared void in Great Britain in 1930,[56] so he applied for patents in the United States. Although his breakthrough would be incorporated into the design of RCA‘s “iconoscope” in 1931, the U.S. patent for Tihanyi’s transmitting tube would not be granted until May 1939. The patent for his receiving tube had been granted the previous October. Both patents had been purchased by RCA prior to their approval.[57][58] Charge storage remains a basic principle in the design of imaging devices for television to the present day.[55] On 25 December 1926, at Hamamatsu Industrial High School in Japan, Japanese inventor Kenjiro Takayanagi demonstrated a TV system with a 40-line resolution that employed a CRT display.[30] This was the first working example of a fully electronic television receiver and Takayanagi’s team later made improvements to this system parallel to other television developments.[59] Takayanagi did not apply for a patent.[60]

    In the 1930s, Allen B. DuMont made the first CRTs to last 1,000 hours of use, one of the factors that led to the widespread adoption of television.[61]

    On 7 September 1927, U.S. inventor Philo Farnsworth‘s image dissector camera tube transmitted its first image, a simple straight line, at his laboratory at 202 Green Street in San Francisco.[62][63] By 3 September 1928, Farnsworth had developed the system sufficiently to hold a demonstration for the press. This is widely regarded as the first electronic television demonstration.[63] In 1929, the system was improved further by eliminating a motor generator so that his television system had no mechanical parts.[64] That year, Farnsworth transmitted the first live human images with his system, including a three and a half-inch image of his wife Elma (“Pem”) with her eyes closed (possibly due to the bright lighting required).[65]

    Vladimir Zworykin demonstrates electronic television (1929).

    Meanwhile, Vladimir Zworykin also experimented with the cathode-ray tube to create and show images. While working for Westinghouse Electric in 1923, he began to develop an electronic camera tube. However, in a 1925 demonstration, the image was dim, had low contrast and poor definition, and was stationary.[66] Zworykin’s imaging tube never got beyond the laboratory stage. However, RCA, which acquired the Westinghouse patent, asserted that the patent for Farnsworth’s 1927 image dissector was written so broadly that it would exclude any other electronic imaging device. Thus, based on Zworykin’s 1923 patent application, RCA filed a patent interference suit against Farnsworth. The U.S. Patent Office examiner disagreed in a 1935 decision, finding priority of invention for Farnsworth against Zworykin. Farnsworth claimed that Zworykin’s 1923 system could not produce an electrical image of the type that would challenge his patent. Zworykin received a patent in 1928 for a color transmission version of his 1923 patent application.[67] He also divided his original application in 1931.[68] Zworykin was unable or unwilling to introduce evidence of a working model of his tube based on his 1923 patent application. In September 1939, after losing an appeal in the courts, and determined to go forward with the commercial manufacturing of television equipment, RCA agreed to pay Farnsworth US$1 million over ten years, in addition to license payments, to use his patents.[69][70]

    In 1933, RCA introduced an improved camera tube that relied on Tihanyi’s charge storage principle.[71] Called the “Iconoscope” by Zworykin, the new tube had a light sensitivity of about 75,000 lux, and thus was claimed to be much more sensitive than Farnsworth’s image dissector.[citation needed] However, Farnsworth had overcome his power issues with his Image Dissector through the invention of a unique “Multipactor” device that he began work on in 1930 and demonstrated in 1931.[72][73] This small tube could amplify a signal reportedly to the 60th power or better[74] and showed great promise in all fields of electronics. Unfortunately, the multipactor wore out at an unsatisfactory rate.[75]

    Manfred von Ardenne in 1933

    At the Berlin Radio Show in August 1931, Manfred von Ardenne gave a public demonstration of a television system using a CRT for both transmission and reception, the first completely electronic television transmission.[76] However, Ardenne had not developed a camera tube, using the CRT instead as a flying-spot scanner to scan slides and film.[77] Ardenne achieved his first transmission of television pictures on 24 December 1933, followed by test runs for a public television service in 1934. The world’s first electronically scanned television service then started in Berlin in 1935, the Fernsehsender Paul Nipkow, culminating in the live broadcast of the 1936 Summer Olympic Games from Berlin to public places all over Germany.[78][79]

    Philo Farnsworth gave the world’s first public demonstration of an all-electronic television system, using a live camera, at the Franklin Institute of Philadelphia on 25 August 1934 and for ten days afterward.[80][81] Mexican inventor Guillermo González Camarena also played an important role in early television. His experiments with television (known as telectroescopía at first) began in 1931 and led to a patent for the “trichromatic field sequential system” color television in 1940.[82] In Britain, the EMI engineering team led by Isaac Shoenberg applied in 1932 for a patent for a new device they called “the Emitron”,[83][84] which formed the heart of the cameras they designed for the BBC. On 2 November 1936, a 405-line broadcasting service employing the Emitron began at studios in Alexandra Palace and transmitted from a specially built mast atop one of the Victorian building’s towers. It alternated briefly with Baird’s mechanical system in adjoining studios but was more reliable and visibly superior. This was the world’s first regular “high-definition” television service.[85]

    The original U.S. iconoscope was noisy, had a high ratio of interference to signal, and ultimately gave disappointing results, especially compared to the high-definition mechanical scanning systems that became available.[86][87] The EMI team, under the supervision of Isaac Shoenberg, analyzed how the iconoscope (or Emitron) produced an electronic signal and concluded that its real efficiency was only about 5% of the theoretical maximum.[88][89] They solved this problem by developing and patenting in 1934 two new camera tubes dubbed super-Emitron and CPS Emitron.[90][91][92] The super-Emitron was between ten and fifteen times more sensitive than the original Emitron and iconoscope tubes, and, in some cases, this ratio was considerably greater.[88] It was used for outside broadcasting by the BBC, for the first time, on Armistice Day 1937, when the general public could watch on a television set as the King laid a wreath at the Cenotaph.[93] This was the first time that anyone had broadcast a live street scene from cameras installed on the roofs of neighboring buildings; neither Farnsworth nor RCA would do the same until the 1939 New York World’s Fair.

    Ad for the beginning of experimental television broadcasting in New York City by RCA in 1939
    Indian-head test pattern used during the black-and-white era before 1970. It was displayed when a television station first signed on every day.

    On the other hand, in 1934, Zworykin shared some patent rights with the German licensee company Telefunken.[94] The “image iconoscope” (“Superikonoskop” in Germany) was produced as a result of the collaboration. This tube is essentially identical to the super-Emitron.[citation needed] The production and commercialization of the super-Emitron and image iconoscope in Europe were not affected by the patent war between Zworykin and Farnsworth because Dieckmann and Hell had priority in Germany for the invention of the image dissector, having submitted a patent application for their Lichtelektrische Bildzerlegerröhre für Fernseher (Photoelectric Image Dissector Tube for Television) in Germany in 1925,[95] two years before Farnsworth did the same in the United States.[96] The image iconoscope (Superikonoskop) became the industrial standard for public broadcasting in Europe from 1936 until 1960, when it was replaced by the vidicon and plumbicon tubes. Indeed, it represented the European tradition in electronic tubes competing against the American tradition represented by the image orthicon.[97][98] The German company Heimann produced the Superikonoskop for the 1936 Berlin Olympic Games,[99][100] later Heimann also produced and commercialized it from 1940 to 1955;[101] finally the Dutch company Philips produced and commercialized the image iconoscope and multicon from 1952 to 1958.[98][102]

    U.S. television broadcasting, at the time, consisted of a variety of markets in a wide range of sizes, each competing for programming and dominance with separate technology until deals were made and standards agreed upon in 1941.[103] RCA, for example, used only Iconoscopes in the New York area, but Farnsworth Image Dissectors in Philadelphia and San Francisco.[104] In September 1939, RCA agreed to pay the Farnsworth Television and Radio Corporation royalties over the next ten years for access to Farnsworth’s patents.[105] With this historic agreement in place, RCA integrated much of what was best about the Farnsworth Technology into their systems.[104] In 1941, the United States implemented 525-line television.[106][107] Electrical engineer Benjamin Adler played a prominent role in the development of television.[108][109]

    The world’s first 625-line television standard was designed in the Soviet Union in 1944 and became a national standard in 1946.[110] The first broadcast in 625-line standard occurred in Moscow in 1948.[111] The concept of 625 lines per frame was subsequently implemented in the European CCIR standard.[112] In 1936, Kálmán Tihanyi described the principle of plasma display, the first flat-panel display system.[113][114]

    Early electronic television sets were large and bulky, with analog circuits made of vacuum tubes. Following the invention of the first working transistor at Bell Labs, Sony founder Masaru Ibuka predicted in 1952 that the transition to electronic circuits made of transistors would lead to smaller and more portable television sets.[115] The first fully transistorized, portable solid-state television set was the 8-inch Sony TV8-301, developed in 1959 and released in 1960.[116][117] This began the transformation of television viewership from a communal viewing experience to a solitary viewing experience.[118] By 1960, Sony had sold over 4 million portable television sets worldwide.[119]

    Color

    Main article: Color television

    Samsung LED TV

    The basic idea of using three monochrome images to produce a color image had been experimented with almost as soon as black-and-white televisions had first been built. Although he gave no practical details, among the earliest published proposals for television was one by Maurice Le Blanc in 1880 for a color system, including the first mentions in television literature of line and frame scanning.[120] Polish inventor Jan Szczepanik patented a color television system in 1897, using a selenium photoelectric cell at the transmitter and an electromagnet controlling an oscillating mirror and a moving prism at the receiver. But his system contained no means of analyzing the spectrum of colors at the transmitting end and could not have worked as he described it.[121] Another inventor, Hovannes Adamian, also experimented with color television as early as 1907. He claimed the first color television project,[122] patented in Germany on 31 March 1908 (patent No. 197183), then in Britain on 1 April 1908 (patent No. 7219),[123] in France (patent No. 390326), and in Russia in 1910 (patent No. 17912).[124]

    Scottish inventor John Logie Baird demonstrated the world’s first color transmission on 3 July 1928, using scanning discs at the transmitting and receiving ends with three spirals of apertures, each spiral with filters of a different primary color, and three light sources at the receiving end, with a commutator to alternate their illumination.[125] Baird also made the world’s first color broadcast on 4 February 1938, sending a mechanically scanned 120-line image from Baird’s Crystal Palace studios to a projection screen at London’s Dominion Theatre.[126] Mechanically scanned color television was also demonstrated by Bell Laboratories in June 1929 using three complete systems of photoelectric cells, amplifiers, glow-tubes, and color filters, with a series of mirrors to superimpose the red, green, and blue images into one full-color image.

    The first practical hybrid system was again pioneered by John Logie Baird. In 1940 he publicly demonstrated a color television combining a traditional black-and-white display with a rotating colored disk. This device was very “deep” but was later improved with a mirror folding the light path into an entirely practical device resembling a large conventional console.[127] However, Baird was unhappy with the design, and, as early as 1944, had commented to a British government committee that a fully electronic device would be better.

    In 1939, Hungarian engineer Peter Carl Goldmark introduced an electro-mechanical system while at CBS, which contained an Iconoscope sensor. The CBS field-sequential color system was partly mechanical, with a disc made of red, blue, and green filters spinning inside the television camera at 1,200 rpm and a similar disc spinning in synchronization in front of the cathode-ray tube inside the receiver set.[128] The system was first demonstrated to the Federal Communications Commission (FCC) on 29 August 1940 and shown to the press on 4 September.[129][130][131][132]

    CBS began experimental color field tests using film as early as 28 August 1940 and live cameras by 12 November.[130][133] NBC (owned by RCA) made its first field test of color television on 20 February 1941. CBS began daily color field tests on 1 June 1941.[134] These color systems were not compatible with existing black-and-white television sets, and, as no color television sets were available to the public at this time, viewing of the color field tests was restricted to RCA and CBS engineers and the invited press. The War Production Board halted the manufacture of television and radio equipment for civilian use from 22 April 1942 to 20 August 1945, limiting any opportunity to introduce color television to the general public.[135][136]

    As early as 1940, Baird had started work on a fully electronic system he called Telechrome. Early Telechrome devices used two electron guns aimed at either side of a phosphor plate. The phosphor was patterned so the electrons from the guns only fell on one side of the patterning or the other. Using cyan and magenta phosphors, a reasonable limited-color image could be obtained. He also demonstrated the same system using monochrome signals to produce a 3D image (called “stereoscopic” at the time). A demonstration on 16 August 1944 was the first example of a practical color television system. Work on the Telechrome continued, and plans were made to introduce a three-gun version for full color. However, Baird’s untimely death in 1946 ended the development of the Telechrome system.[137][138] Similar concepts were common through the 1940s and 1950s, differing primarily in the way they re-combined the colors generated by the three guns. The Geer tube was similar to Baird’s concept but used small pyramids with the phosphors deposited on their outside faces instead of Baird’s 3D patterning on a flat surface. The Penetron used three layers of phosphor on top of each other and increased the power of the beam to reach the upper layers when drawing those colors. The Chromatron used a set of focusing wires to select the colored phosphors arranged in vertical stripes on the tube.

    One of the great technical challenges of introducing color broadcast television was the desire to conserve bandwidth, potentially three times that of the existing black-and-white standards, and not use an excessive amount of radio spectrum. In the United States, after considerable research, the National Television Systems Committee[139] approved an all-electronic system developed by RCA, which encoded the color information separately from the brightness information and significantly reduced the resolution of the color information to conserve bandwidth. Because black-and-white televisions could receive the same transmission and display it in black-and-white, the adopted color system is “backward compatible.” (“Compatible Color,” featured in RCA advertisements of the period, is mentioned in the song “America” from West Side Story, 1957.) The brightness image remained compatible with existing black-and-white television sets at slightly reduced resolution, while color televisions could decode the extra information in the signal and produce a limited-resolution color display. The higher-resolution black-and-white and lower-resolution color images combine in the brain to produce a seemingly high-resolution color image. The NTSC standard represented a significant technical achievement.
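
    The separation of brightness from color that makes this backward compatibility work can be sketched in a few lines of Python. The luma weights below are the standard NTSC coefficients; everything else (the function names, the sample pixel) is illustrative, and the real system additionally modulates the two color-difference components onto a subcarrier, which is omitted here.

        # Minimal sketch of luma/chroma separation (NTSC-style weights), omitting subcarrier modulation.
        def encode(r, g, b):
            """Split a gamma-corrected RGB pixel (values 0.0-1.0) into luma plus two color-difference signals."""
            y = 0.299 * r + 0.587 * g + 0.114 * b   # luma: all that a black-and-white set displays
            return y, b - y, r - y                  # chroma: transmitted at reduced resolution

        def decode_monochrome(y, b_y, r_y):
            return y                                # a B&W receiver simply ignores the chroma

        def decode_color(y, b_y, r_y):
            b = y + b_y
            r = y + r_y
            g = (y - 0.299 * r - 0.114 * b) / 0.587  # recover green from luma and the other two
            return r, g, b

        original = (0.8, 0.4, 0.2)                  # an arbitrary test pixel
        recovered = decode_color(*encode(*original))
        assert all(abs(want - got) < 1e-9 for want, got in zip(original, recovered))

    Because the eye is less sensitive to fine color detail than to fine brightness detail, the color-difference signals can be sent at much lower resolution without an obvious loss in perceived quality.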

    Color bars used in a test pattern, sometimes used when no program material is available

    The first color broadcast (the first episode of the live program The Marriage) occurred on 8 July 1954. However, during the following ten years, most network broadcasts and nearly all local programming continued to be black-and-white. It was not until the mid-1960s that color sets started selling in large numbers, due in part to the color transition of 1965, in which it was announced that over half of all network prime-time programming would be broadcast in color that fall. The first all-color prime-time season came just one year later. In 1972, the last holdout among daytime network programs converted to color, resulting in the first completely all-color network season.

    Early color sets were either floor-standing console models or tabletop versions nearly as bulky and heavy, so in practice they remained firmly anchored in one place. GE‘s relatively compact and lightweight Porta-Color set was introduced in the spring of 1966. It used a transistor-based UHF tuner.[140] The first fully transistorized color television in the United States was the Quasar television introduced in 1967.[141] These developments made watching color television a more flexible and convenient proposition.

    In 1972, sales of color sets finally surpassed sales of black-and-white sets. Color broadcasting in Europe was not standardized on the PAL format until the 1960s, and broadcasts did not start until 1967. By this point, many of the technical issues in the early sets had been worked out, and the spread of color sets in Europe was fairly rapid. By the mid-1970s, the only stations broadcasting in black-and-white were a few high-numbered UHF stations in small markets and a handful of low-power repeater stations in even smaller markets such as vacation spots. By 1979, even the last of these had converted to color. By the early 1980s, B&W sets had been pushed into niche markets, notably low-power uses, small portable sets, and video monitor screens in lower-cost consumer equipment. By the late 1980s, even these last niche B&W markets had shifted to color sets.

    Digital

    Main article: Digital television

    See also: Digital television transition

    Digital television (DTV) is the transmission of audio and video by digitally processed and multiplexed signals, in contrast to the analog and channel-separated signals used by analog television. Due to data compression, digital television can support more than one program in the same channel bandwidth.[142] It is an innovative service that represents the most significant evolution in television broadcast technology since color television emerged in the 1950s.[143] Digital television’s roots have been tied very closely to the availability of inexpensive, high-performance computers. It was not until the 1990s that digital television became possible.[144] Digital television was not practical earlier because of the prohibitively high bandwidth requirements of uncompressed digital video,[145][146] around 200 Mbit/s for a standard-definition television (SDTV) signal[145] and over 1 Gbit/s for high-definition television (HDTV).[146]
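
    The bandwidth figures cited here come straight from multiplying picture size, frame rate, and bits per pixel. A rough back-of-the-envelope version in Python is sketched below; it assumes 4:2:2 chroma subsampling at 10 bits per component (about 20 bits per pixel on average) and ignores blanking intervals and audio, so the results are approximate.

        # Rough uncompressed-video bitrate: pixels per frame x frames per second x bits per pixel.
        def uncompressed_mbit_per_s(width, height, fps, bits_per_pixel=20):
            return width * height * fps * bits_per_pixel / 1e6

        print(uncompressed_mbit_per_s(720, 576, 25))    # ~207 Mbit/s for a 576-line SDTV signal
        print(uncompressed_mbit_per_s(1920, 1080, 30))  # ~1244 Mbit/s (over 1 Gbit/s) for 1080-line HDTV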

    A digital television service was proposed in 1986 by Nippon Telegraph and Telephone (NTT) and the Ministry of Posts and Telecommunication (MPT) in Japan, where there were plans to develop an “Integrated Network System” service. However, it was not possible to implement such a digital television service practically until the adoption of DCT video compression technology made it possible in the early 1990s.[145]

    In the mid-1980s, as Japanese consumer electronics firms forged ahead with the development of HDTV technology, the MUSE analog format proposed by NHK, a Japanese company, was seen as a pacesetter that threatened to eclipse U.S. electronics companies’ technologies. Until June 1990, the Japanese MUSE standard, based on an analog system, was the front-runner among the more than 23 other technical concepts under consideration. Then, a U.S. company, General Instrument, demonstrated the possibility of a digital television signal. This breakthrough was of such significance that the FCC was persuaded to delay its decision on an ATV standard until a digitally-based standard could be developed.

    In March 1990, when it became clear that a digital standard was possible, the FCC made several critical decisions. First, the Commission declared that the new ATV standard must be more than an enhanced analog signal; it had to provide a genuine HDTV signal with at least twice the resolution of existing television images. Then, to ensure that viewers who did not wish to buy a new digital television set could continue to receive conventional television broadcasts, it dictated that the new ATV standard must be capable of being “simulcast” on different channels. The new ATV standard also allowed the new DTV signal to be based on entirely new design principles. Although incompatible with the existing NTSC standard, the new DTV standard would be able to incorporate many improvements.

    The final standards adopted by the FCC did not require a single standard for scanning formats, aspect ratios, or lines of resolution. This compromise resulted from a dispute between the consumer electronics industry (joined by some broadcasters) and the computer industry (joined by the film industry and some public interest groups) over which of the two scanning processes—interlaced or progressive—would be best suited for the newer digital HDTV-compatible display devices.[147] Interlaced scanning, which had been specifically designed for older analog CRT display technologies, scans even-numbered lines first, then odd-numbered ones. Interlaced scanning can be regarded as the first video compression model, as it was partly developed in the 1940s to double the image resolution to exceed the limitations of the television broadcast bandwidth. Another reason for its adoption was to limit the flickering on early CRT screens, whose phosphor-coated screens could only retain the image from the electron scanning gun for a relatively short duration.[148] However, interlaced scanning does not work as efficiently on newer display devices such as liquid-crystal displays (LCDs), which are better suited to a more frequent progressive refresh rate.[147]

    Progressive scanning, the format that the computer industry had long adopted for computer display monitors, scans every line in sequence, from top to bottom. Progressive scanning, in effect, doubles the amount of data generated for every full screen displayed in comparison to interlaced scanning by painting the screen in one pass in 1/60 second instead of two passes in 1/30 second. The computer industry argued that progressive scanning is superior because it does not “flicker” on the new standard of display devices in the manner of interlaced scanning. It also argued that progressive scanning enables easier connections with the Internet and is more cheaply converted to interlaced formats than vice versa. The film industry also supported progressive scanning because it offered a more efficient means of converting filmed programming into digital formats. For their part, the consumer electronics industry and broadcasters argued that interlaced scanning was the only technology that could transmit the highest quality pictures then (and currently) feasible, i.e., 1,080 lines per picture and 1,920 pixels per line. Broadcasters also favored interlaced scanning because their vast archive of interlaced programming is not readily compatible with a progressive format. William F. Schreiber, who was director of the Advanced Television Research Program at the Massachusetts Institute of Technology from 1983 until his retirement in 1990, thought that the continued advocacy of interlaced equipment originated from consumer electronics companies that were trying to recoup the substantial investments they had made in interlaced technology.[149]
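
    The difference between the two scanning processes is easy to state in code. The Python sketch below uses invented names and a nominal 1080-line display refreshed 60 times per second, not any broadcaster’s actual implementation; it lists the line order of each pass and shows why a progressive display moves roughly twice as much picture data as an interlaced one that sends half the lines per pass.

        # Interlaced vs. progressive scanning of a frame with a given number of scan lines.
        def interlaced_passes(lines):
            """Two passes (fields): even-numbered lines first, then odd-numbered ones."""
            return [list(range(0, lines, 2)), list(range(1, lines, 2))]

        def progressive_passes(lines):
            """One pass: every line in sequence, from top to bottom."""
            return [list(range(lines))]

        LINES, PIXELS_PER_LINE, PASSES_PER_SECOND = 1080, 1920, 60

        for name, passes in (("interlaced", interlaced_passes(LINES)),
                             ("progressive", progressive_passes(LINES))):
            lines_per_pass = len(passes[0])
            pixels_per_second = lines_per_pass * PIXELS_PER_LINE * PASSES_PER_SECOND
            print(name, lines_per_pass, "lines per pass,", pixels_per_second, "pixels per second")
        # Progressive paints all 1080 lines on every pass; interlaced paints 540,
        # so progressive generates about twice the data at the same refresh rate.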

    The digital television transition started in the late 2000s. Governments across the world set deadlines for analog shutdown during the 2010s. Initially, the adoption rate was low, as the first digital tuner-equipped television sets were costly. However, as the price of digital-capable television sets dropped, more and more households converted to digital television sets. The transition was expected to be completed worldwide by the mid-to-late 2010s.

    Smart television

    Main article: Smart television

    Not to be confused with Internet television, Internet Protocol television, or Web television.

    A smart TV

    The advent of digital television allowed innovations like smart television sets. A smart television, sometimes referred to as a “connected TV” or “hybrid TV,” is a television set or set-top box with integrated Internet and Web 2.0 features and is an example of technological convergence between computers, television sets, and set-top boxes. Besides the traditional functions of television sets and set-top boxes provided through traditional broadcasting media, these devices can also provide Internet TV, online interactive media, over-the-top content, on-demand streaming media, and home networking access. These TVs come pre-loaded with an operating system.[10][150][151][152]

    Smart TV is not to be confused with Internet TV, Internet Protocol television (IPTV), or Web TV. Internet television refers to receiving television content over the Internet instead of through traditional systems—terrestrial, cable, and satellite. IPTV is one of the emerging Internet television technology standards for use by television networks. Web television (WebTV) is a term used for programs created by a wide variety of companies and individuals for broadcast on Internet TV. A first patent was filed in 1994[153] (and extended the following year)[154] for an “intelligent” television system, linked with data processing systems, using a digital or analog network. Apart from being linked to data networks, one key feature is its ability to automatically download the software routines a user requires and to process their requests. In 2015, major TV manufacturers announced that their midrange and high-end models would be produced only as smart TVs.[7][8][9] Smart TVs have become more affordable since they were first introduced, with 46 million U.S. households having at least one as of 2019.[155]

    3D

    Main article: 3D television

    3D television conveys depth perception to the viewer by employing techniques such as stereoscopic display, multi-view display, 2D-plus-depth, or any other form of 3D display. Most modern 3D television sets use an active shutter 3D system or a polarized 3D system, and some are autostereoscopic without the need for glasses. Stereoscopic 3D television was demonstrated for the first time on 10 August 1928 by John Logie Baird at his company’s premises at 133 Long Acre, London.[156] Baird pioneered a variety of 3D television systems using electromechanical and cathode-ray tube techniques. The first 3D television was produced in 1935. The advent of digital television in the 2000s greatly improved 3D television sets. Although 3D television sets are quite popular for watching 3D home media such as Blu-ray discs, 3D programming has largely failed to make inroads with the public. As a result, many 3D television channels that started in the early 2010s were shut down by the mid-2010s. According to DisplaySearch, 3D television shipments totaled 41.45 million units in 2012, compared with 24.14 million in 2011 and 2.26 million in 2010.[157] As of late 2013, the number of 3D TV viewers had started to decline.[158][159][160][161][162]

    Broadcast systems

    Terrestrial television

    Main article: Terrestrial television

    See also: Timeline of the introduction of television in countries

    A modern high-gain UHF Yagi television antenna. It has 17 directors and one reflector (made of four rods) shaped as a corner reflector.

    Programming is broadcast by television stations, sometimes called “channels,” as stations are licensed by their governments to broadcast only over assigned channels in the television band. At first, terrestrial broadcasting was the only way television could be widely distributed, and because bandwidth was limited, i.e., there were only a small number of channels available, government regulation was the norm. In the U.S., the Federal Communications Commission (FCC) allowed stations to broadcast advertisements beginning in July 1941 but required public service programming commitments as a condition of each license. By contrast, the United Kingdom chose a different route, imposing a television license fee on owners of television reception equipment to fund the British Broadcasting Corporation (BBC), which had public service as part of its Royal Charter.

    WRGB claims to be the world’s oldest television station, tracing its roots to an experimental station founded on 13 January 1928, broadcasting from the General Electric factory in Schenectady, NY, under the call letters W2XB.[163] It was popularly known as “WGY Television” after its sister radio station. Later, in 1928, General Electric started a second facility, this one in New York City, which had the call letters W2XBS and which today is known as WNBC. The two stations were experimental and had no regular programming, as receivers were operated by engineers within the company. The image of a Felix the Cat doll rotating on a turntable was broadcast for 2 hours every day for several years as engineers tested new technology. On 2 November 1936, the BBC began transmitting the world’s first public regular high-definition service from the Victorian Alexandra Palace in north London.[164] It therefore claims to be the birthplace of television broadcasting as we now know it.

    With the widespread adoption of cable across the United States in the 1970s and 1980s, terrestrial television broadcasts have been in decline; in 2013 it was estimated that about 7% of US households used an antenna.[165][166] A slight increase in use began around 2010 due to the switchover to digital terrestrial television broadcasts, which offered pristine image quality over very large areas and provided an alternative to cable television (CATV) for cord cutters. Other countries around the world are also in the process of either shutting down analog terrestrial television or switching over to digital terrestrial television.

    Cable television

    Main article: Cable television

    See also: Cable television by region

    Coaxial cable is used to carry cable television signals into cathode-ray tube and flat-panel television sets.

    Cable television is a system of broadcasting television programming to paying subscribers via radio frequency (RF) signals transmitted through coaxial cables or light pulses through fiber-optic cables. This contrasts with traditional terrestrial television, in which the television signal is transmitted over the air by radio waves and received by a television antenna attached to the television. Since the 2000s, FM radio programming, high-speed Internet, telephone service, and similar non-television services have also been provided through these cables. The abbreviation CATV is sometimes used for cable television in the United States. It originally stood for Community Access Television or Community Antenna Television, from cable television’s origins in 1948: in areas where over-the-air reception was limited by distance from transmitters or mountainous terrain, large “community antennas” were constructed, and cable was run from them to individual homes.[167]

    Satellite television

    Main article: Satellite television

    DBS satellite dishes installed on an apartment complex

    Satellite television is a system of supplying television programming using broadcast signals relayed from communication satellites. The signals are received via an outdoor parabolic reflector antenna, usually referred to as a satellite dish, and a low-noise block downconverter (LNB). A satellite receiver then decodes the desired television program for viewing on a television set. Receivers can be external set-top boxes or built-in television tuners. Satellite television provides a wide range of channels and services, especially to geographic areas without terrestrial television or cable television.

    The most common method of reception is direct-broadcast satellite television (DBSTV), also known as “direct to home” (DTH).[168] In DBSTV systems, signals are relayed from a direct broadcast satellite in the Ku band and are completely digital.[169] Satellite TV systems formerly used systems known as television receive-only. These systems received analog signals transmitted in the C-band spectrum from FSS-type satellites and required the use of large dishes. Consequently, these systems were nicknamed “big dish” systems and were more expensive and less popular.[170]

    The direct-broadcast satellite television signals were earlier analog signals and later digital signals, both of which require a compatible receiver. Digital signals may include high-definition television (HDTV). Some transmissions and channels are free-to-air or free-to-view, while many other channels are pay television requiring a subscription.[171] In 1945, British science fiction writer Arthur C. Clarke proposed a worldwide communications system that would function by means of three satellites equally spaced apart in Earth orbit.[172][173] This was published in the October 1945 issue of the Wireless World magazine and won him the Franklin Institute‘s Stuart Ballantine Medal in 1963.[174][175]

    The first satellite television signals from Europe to North America were relayed via the Telstar satellite over the Atlantic Ocean on 23 July 1962.[176] The signals were received and broadcast in North American and European countries and watched by over 100 million people.[176] Launched in 1962, the Relay 1 satellite was the first satellite to transmit television signals from the US to Japan.[177] The first geosynchronous communication satellite, Syncom 2, was launched on 26 July 1963.[178]

    The world’s first commercial communications satellite, called Intelsat I and nicknamed “Early Bird”, was launched into geosynchronous orbit on 6 April 1965.[179] The first national network of television satellites, called Orbita, was created by the Soviet Union in October 1967, and was based on the principle of using the highly elliptical Molniya satellite for rebroadcasting and delivering television signals to ground downlink stations.[180] The first commercial North American satellite to carry television transmissions was Canada’s geostationary Anik 1, which was launched on 9 November 1972.[181] ATS-6, the world’s first experimental educational and Direct Broadcast Satellite (DBS), was launched on 30 May 1974.[182] It transmitted at 860 MHz using wideband FM modulation and had two sound channels. The transmissions were focused on the Indian subcontinent, but experimenters were able to receive the signal in Western Europe using home-constructed equipment that drew on UHF television design techniques already in use.[183]

    The first in a series of Soviet geostationary satellites to carry Direct-To-Home television, Ekran 1, was launched on 26 October 1976.[184] It used a 714 MHz UHF downlink frequency so that the transmissions could be received with existing UHF television technology rather than microwave technology.[185]

    Internet television

    Main article: Streaming television

    Not to be confused with Smart television, Internet Protocol television, or Web television.

    Internet television (Internet TV), or online television, is the digital distribution of television content via the Internet, as opposed to traditional systems like terrestrial, cable, and satellite, although the Internet itself is received by terrestrial, cable, or satellite methods. Internet television is a general term that covers the delivery of television series and other video content over the Internet by video streaming technology, typically by major traditional television broadcasters. Internet television should not be confused with Smart TV, IPTV, or Web TV. Smart television refers to a television set with a built-in operating system. Internet Protocol television (IPTV) is one of the emerging Internet television technology standards for use by television networks. Web television is a term used for programs created by a wide variety of companies and individuals for broadcast on Internet television.

    Traditional cable and satellite television providers began to offer services such as Sling TV, owned by Dish Network, which was unveiled in January 2015.[186] DirecTV, another satellite television provider, launched its own streaming service, DirecTV Stream, in 2016.[187][188] Sky launched a similar streaming service in the UK called Now. In 2013, the video-on-demand website Netflix earned the first Primetime Emmy Award nominations for original streaming television at the 65th Primetime Emmy Awards. Three of its series, House of Cards, Arrested Development, and Hemlock Grove, earned nominations that year.[189] On July 13, 2015, cable company Comcast announced an HBO plus broadcast TV package at a price discounted from basic broadband plus basic cable.[190]

    In 2017, YouTube launched YouTube TV, a streaming service that allows users to watch live television programs from popular cable or network channels and record shows to stream anywhere, anytime.[191][192][193] As of 2017, 28% of US adults cite streaming services as their main means of watching television, and 61% of those ages 18 to 29 cite it as their main method.[194][195] As of 2018, Netflix was the world’s largest streaming TV network, with 117 million paid subscribers, and the world’s largest Internet media and entertainment company by revenue and market capitalization.[196][197] In 2020, the COVID-19 pandemic had a strong impact on the television streaming business, as lifestyle changes such as lockdowns kept people at home.[198][199][200][201]

    Sets

    Main article: Television set

    RCA 630-TS, the first mass-produced television set, which sold in 1946–1947

    A television set, also called a television receiver, television, TV set, TV, or “telly,” is a device that combines a tuner, display, amplifier, and speakers for the purpose of viewing television and hearing its audio components. Introduced in the late 1920s in mechanical form, television sets became a popular consumer product after World War II in electronic form, using cathode-ray tubes. The addition of color to broadcast television after 1953 further increased the popularity of television sets, and an outdoor antenna became a common feature of suburban homes. The ubiquitous television set became the display device for recorded media in the 1970s, such as Betamax and VHS, which enabled viewers to record TV shows and watch prerecorded movies. In the subsequent decades, television sets were used to watch DVDs and Blu-ray Discs of movies and other content. Major TV manufacturers announced the discontinuation of CRT, DLP, plasma, and fluorescent-backlit LCDs by the mid-2010s. Televisions made since the 2010s mostly use LEDs (LED-backlit LCDs).[4][5][202][203] LEDs are expected to be gradually replaced by OLEDs in the near future.[6]

    Display technologies

    Main article: Display device

    Disk

    Main article: Nipkow disk

    The earliest systems employed a spinning disk to create and reproduce images.[204] These usually had a low resolution and screen size and never became popular with the public.

    CRT

    Main article: Cathode-ray tube

    A 14-inch cathode-ray tube showing its deflection coils and electron guns

    The cathode-ray tube (CRT) is a vacuum tube containing one or more electron guns (a source of electrons or electron emitter) and a fluorescent screen used to view images.[38] It has the means to accelerate and deflect the electron beam(s) onto the screen to create the images. The images may represent electrical waveforms (oscilloscope), pictures (television, computer monitor), radar targets or others. The CRT uses an evacuated glass envelope that is large, deep (i.e., long from front screen face to rear end), fairly heavy, and relatively fragile. As a matter of safety, the face is typically made of thick lead glass so as to be highly shatter-resistant and to block most X-ray emissions, particularly if the CRT is used in a consumer product.

    In television sets and computer monitors, the entire front area of the tube is scanned repetitively and systematically in a fixed pattern called a raster. An image is produced by controlling the intensity of each of the three electron beams, one for each additive primary color (red, green, and blue) with a video signal as a reference.[205] In all modern CRT monitors and televisions, the beams are bent by magnetic deflection, a varying magnetic field generated by coils and driven by electronic circuits around the neck of the tube, although electrostatic deflection is commonly used in oscilloscopes, a type of diagnostic instrument.[205]
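
    As a purely illustrative sketch (not from the original article), the interlaced raster order described above can be modeled in a few lines of Python; the line count and field order used here are generic assumptions, since real standards differ in field order and blanking intervals.

    ```python
    def interlaced_raster(lines=480):
        """Yield scan-line indices in interlaced raster order:
        one field of alternating lines, then the other (generic
        convention; real systems differ in field order and blanking)."""
        for y in range(0, lines, 2):   # field 1: even-indexed lines
            yield y
        for y in range(1, lines, 2):   # field 2: odd-indexed lines
            yield y

    # Each scan line is traced left to right while the video signal
    # modulates the red, green, and blue beam intensities.
    order = list(interlaced_raster(480))
    print(order[:5], "...", order[-5:])
    ```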

    DLP

    Main article: Digital Light Processing

    The Christie Mirage 5000, a 2001 DLP projector

    Digital Light Processing (DLP) is a type of video projector technology that uses a digital micromirror device. Some DLPs have a TV tuner, which makes them a type of TV display. It was originally developed in 1987 by Dr. Larry Hornbeck of Texas Instruments. While the DLP imaging device was invented by Texas Instruments, the first DLP-based projector was introduced by Digital Projection Ltd in 1997. Digital Projection and Texas Instruments were both awarded Emmy Awards in 1998 for the invention of the DLP projector technology. DLP is used in a variety of display applications, from traditional static displays to interactive displays and also non-traditional embedded applications, including medical, security, and industrial uses. DLP technology is used in DLP front projectors (standalone projection units for classrooms and businesses primarily) but also in private homes; in these cases, the image is projected onto a projection screen. DLP is also used in DLP rear projection television sets and digital signs. It is also used in about 85% of digital cinema projection.[206]

    Plasma

    Main article: Plasma display

    A plasma display panel (PDP) is a type of flat-panel display common to large television displays 30 inches (76 cm) or larger. They are called “plasma” displays because the technology uses small cells containing electrically charged ionized gases, or what are in essence chambers more commonly known as fluorescent lamps.

    LCD

    Main article: Liquid-crystal display

    A generic LCD TV, with speakers on either side of the screen

    Liquid-crystal-display televisions (LCD TVs) are television sets that use liquid-crystal display technology to produce images. LCD televisions are much thinner and lighter than cathode-ray tubes (CRTs) of similar display size and are available in much larger sizes (e.g., 90-inch diagonal). When manufacturing costs fell, this combination of features made LCDs practical for television receivers. LCDs come in two types: those using cold cathode fluorescent lamps, simply called LCDs, and those using LEDs as the backlight, commonly called LEDs.

    In 2007, LCD television sets surpassed sales of CRT-based television sets worldwide for the first time, and their sales figures relative to other technologies accelerated. LCD television sets quickly displaced the only major competitors in the large-screen market, the plasma display panel and rear-projection television.[207] In the mid-2010s, LCDs, especially LEDs, became, by far, the most widely produced and sold television display type.[202][203] LCDs also have disadvantages. Other technologies address these weaknesses, including OLEDs, FED, and SED, but as of 2014 none of these had entered widespread production.

    OLED

    Main article: OLED

    OLED TV

    An OLED (organic light-emitting diode) is a light-emitting diode (LED) in which the emissive electroluminescent layer is a film of organic compound which emits light in response to an electric current. This layer of organic semiconductor is situated between two electrodes. Generally, at least one of these electrodes is transparent. OLEDs are used to create digital displays in devices such as television screens. They are also used in computer monitors and portable systems such as mobile phones, handheld game consoles, and PDAs.

    There are two main groups of OLED: those based on small molecules and those employing polymers. Adding mobile ions to an OLED creates a light-emitting electrochemical cell or LEC, which has a slightly different mode of operation. OLED displays can use either passive-matrix (PMOLED) or active-matrix (AMOLED) addressing schemes. Active-matrix OLEDs require a thin-film transistor backplane to switch each individual pixel on or off but allow for higher resolution and larger display sizes.

    An OLED display works without a backlight. Thus, it can display deep black levels and can be thinner and lighter than a liquid crystal display (LCD). In low ambient light conditions such as a dark room, an OLED screen can achieve a higher contrast ratio than an LCD, whether the LCD uses cold cathode fluorescent lamps or LED backlight. OLEDs are expected to replace other forms of display in the near future.[6]

    Display resolution

    Comparison of 8K UHDTV, 4K UHDTV, HDTV and SDTV resolution

    LD

    Main article: Low-definition television

    Low-definition television or LDTV refers to television systems that have a lower screen resolution than standard-definition television systems, such as 240p (320×240). It is used in handheld televisions. The most common source of LDTV programming is the Internet, where mass distribution of higher-resolution video files could overwhelm computer servers and take too long to download. Many mobile phones and portable devices such as Apple‘s iPod Nano or Sony’s PlayStation Portable use LDTV video, as higher-resolution files would be excessive for the needs of their small screens (320×240 and 480×272 pixels respectively). The current generation of iPod Nanos has LDTV screens, as do the first three generations of iPod Touch and iPhone (480×320). For the first years of its existence, YouTube offered only one low-definition resolution of 320×240 at 30 fps or less. A standard, consumer-grade videotape can be considered SDTV due to its resolution (approximately 360×480i/576i).

    SD

    Main article: Standard-definition television

    Standard-definition television or SDTV refers to two different resolutions: 576i, with 576 interlaced lines of resolution, derived from the European-developed PAL and SECAM systems, and 480i based on the American National Television System Committee NTSC system. SDTV is a television system that uses a resolution that is not considered to be either high-definition television (720p, 1080i, 1080p, 1440p, 4K UHDTV, and 8K UHD) or enhanced-definition television (EDTV 480p). In North America, digital SDTV is broadcast in the same 4:3 aspect ratio as NTSC signals, with widescreen content being center cut.[208] However, in other parts of the world that used the PAL or SECAM color systems, standard-definition television is now usually shown with a 16:9 aspect ratio, with the transition occurring between the mid-1990s and mid-2000s. Older programs with a 4:3 aspect ratio are shown in the United States as 4:3, with non-ATSC countries preferring to reduce the horizontal resolution by anamorphically scaling a pillarboxed image.
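
    As a rough arithmetic sketch (not from the original article), the following Python snippet shows what the center-cut framing mentioned above discards; the pixel figures are chosen purely for illustration.

    ```python
    # A 16:9 widescreen frame shown on a 4:3 raster keeps only the
    # middle 4:3 portion of the picture; pillarboxing does the reverse.
    def center_cut(height, src_aspect=16 / 9, dst_aspect=4 / 3):
        """Return (columns kept, total source columns) for a center cut."""
        total = height * src_aspect
        kept = height * dst_aspect
        return kept, total

    kept, total = center_cut(480)
    print(f"{kept:.0f} of {total:.0f} columns survive ({kept / total:.0%}).")
    # -> 640 of 853 columns survive (75%).
    ```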

    HD

    Main article: High-definition television

    High-definition television (HDTV) provides a resolution that is substantially higher than that of standard-definition television.

    HDTV may be transmitted in various formats:

    • 1080p: 1920×1080p: 2,073,600 pixels (~2.07 megapixels) per frame
    • 1080i: 1920×1080i: 1,036,800 pixels (~1.04 MP) per field or 2,073,600 pixels (~2.07 MP) per frame
      • A non-standard CEA resolution exists in some countries such as 1440×1080i: 777,600 pixels (~0.78 MP) per field or 1,555,200 pixels (~1.56 MP) per frame
    • 720p: 1280×720p: 921,600 pixels (~0.92 MP) per frame
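
    As a quick, illustrative check of the figures in the list above (not part of the original article), the per-frame pixel counts can be recomputed directly:

    ```python
    # Recomputing the per-frame pixel counts quoted in the list above.
    hd_formats = {
        "1080p / 1080i (per frame)": (1920, 1080),
        "1440×1080i (per frame)": (1440, 1080),
        "720p": (1280, 720),
    }
    for name, (w, h) in hd_formats.items():
        pixels = w * h
        print(f"{name}: {pixels:,} pixels (~{pixels / 1e6:.2f} MP)")
    # 1080p: 2,073,600 (~2.07 MP); 1440×1080: 1,555,200 (~1.56 MP);
    # 720p: 921,600 (~0.92 MP)
    ```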

    UHD

    Main article: Ultra-high-definition television

    Ultra-high-definition television (also known as Super Hi-Vision, Ultra HD television, UltraHD, UHDTV, or UHD) includes 4K UHD (2160p) and 8K UHD (4320p), which are two digital video formats proposed by NHK Science & Technology Research Laboratories and defined and approved by the International Telecommunication Union (ITU). The Consumer Electronics Association announced on 17 October 2012 that “Ultra High Definition,” or “Ultra HD,” would be used for displays that have an aspect ratio of at least 16:9 and at least one digital input capable of carrying and presenting natural video at a minimum resolution of 3840×2160 pixels.[209][210]

    Market share

    North American consumers purchase a new television set on average every seven years, and the average household owns 2.8 televisions. As of 2011, 48 million are sold each year at an average price of $460 and size of 38 in (97 cm).[211]

    Worldwide TV manufacturers market share, H1 2023
    Manufacturer | Market share[212]
    Samsung Electronics | 31.2%
    LG Electronics | 16.2%
    TCL | 10.2%
    Hisense | 9.5%
    Sony | 5.7%
    Others | 39%

    Content

    Programming

    See also: Television show, Video production, and Television studio

    Getting TV programming shown to the public can happen in many different ways. After production, the next step is to market and deliver the product to whichever markets are open to using it. This typically happens on two levels:

    1. Original run or First run: a producer creates a program of one or multiple episodes and shows it on a station or network that has either paid for the production itself or been granted a license by the television producers to broadcast it.
    2. Broadcast syndication: this is the terminology rather broadly used to describe secondary programming usages (beyond the original run). It includes secondary runs in the country of the first issue, but also international usage, which may not be managed by the originating producer. In many cases, other companies, television stations, or individuals are engaged to do the syndication work, in other words, to sell the product into the markets they are allowed to sell into by contract from the copyright holders; in most cases, the producers.

    First-run programming is increasing on subscription services outside of the United States, but few domestically produced programs are syndicated on domestic free-to-air (FTA) channels elsewhere. This practice is increasing, however, generally on digital-only FTA channels or with subscriber-only, first-run material appearing on FTA. Unlike in the United States, repeat FTA screenings of an FTA network program usually only occur on that network. Also, affiliates rarely buy or produce non-network programming that is not focused on local programming.

    Genres


    Television genres include a broad range of programming types that entertain, inform, and educate viewers. The most expensive entertainment genres to produce are usually dramas and dramatic miniseries. However, other genres, such as historical Westerns, may also have high production costs.

    Pop culture entertainment genres include action-oriented shows such as police, crime, detective dramas, horror, or thriller shows. There are also other variants of the drama genre, such as medical dramas and daytime soap operas. Sci-fi series can fall into either the drama or action category, depending on whether they emphasize philosophical questions or high adventure. Comedy is a popular genre that includes situation comedy (sitcom) and animated series for the adult demographic, such as Comedy Central’s South Park.

    The least expensive forms of entertainment programming genres are game shows, talk shows, variety shows, and reality television. Game shows feature contestants answering questions and solving puzzles to win prizes. Talk shows contain interviews with film, television, music, and sports celebrities and public figures. Variety shows feature a range of musical performers and other entertainers, such as comedians and magicians, introduced by a host or Master of Ceremonies. There is some crossover between some talk shows and variety shows because leading talk shows often feature performances by bands, singers, comedians, and other performers in between the interview segments. Reality television series feature “regular” people (i.e., not actors) facing unusual challenges or experiences, ranging from arrest by police officers (COPS) to significant weight loss (The Biggest Loser). A derived version of reality shows depicts celebrities doing mundane activities such as going about their everyday life (The Osbournes, Snoop Dogg’s Father Hood) or doing regular jobs (The Simple Life).[213]

    Fictional television programs that some television scholars and broadcasting advocacy groups argue are “quality television“, include series such as Twin Peaks and The Sopranos. Kristin Thompson argues that some of these television series exhibit traits also found in art films, such as psychological realism, narrative complexity, and ambiguous plotlines. Nonfiction television programs that some television scholars and broadcasting advocacy groups argue are “quality television” include a range of serious, noncommercial programming aimed at a niche audience, such as documentaries and public affairs shows.

    Funding

    Map: television sets per 1,000 people of the world (legend ranges: 0–50, 50–100, 100–200, 200–300, 300–500, 500–1000, 1000+, no data)

    Around the world, broadcast television is financed by government, advertising, licensing (a form of tax), subscription, or any combination of these. To protect revenues, subscription television channels are usually encrypted to ensure that only subscribers receive the decryption codes to see the signal. Unencrypted channels are known as free-to-air or FTA. In 2009, the global TV market represented 1,217.2 million TV households with at least one TV and total revenues of 268.9 billion EUR (declining 1.2% compared to 2008).[214] North America had the biggest TV revenue market share with 39% followed by Europe (31%), Asia-Pacific (21%), Latin America (8%), and Africa and the Middle East (2%).[215] Globally, the different TV revenue sources are divided into 45–50% TV advertising revenues, 40–45% subscription fees, and 10% public funding.[216][217]

    Advertising

    Main article: Television advertisement

    Television’s broad reach makes it a powerful and attractive medium for advertisers. Many television networks and stations sell blocks of broadcast time to advertisers (“sponsors”) to fund their programming.[218] A television advertisement (variously called a television commercial, commercial, or ad in American English, and known in British English as an advert) is a span of television programming produced and paid for by an organization, which conveys a message, typically to market a product or service. Advertising revenue provides a significant portion of the funding for most privately owned television networks. The vast majority of television advertisements today consist of brief advertising spots, ranging in length from a few seconds to several minutes (as well as program-length infomercials). Advertisements of this sort have been used to promote a wide variety of goods, services, and ideas since the beginning of television.

    Television was still in its experimental phase in 1928, but the medium’s potential to sell goods was already predicted.

    The effects of television advertising upon the viewing public (and the effects of mass media in general) have been the subject of discourse by philosophers, including Marshall McLuhan. The viewership of television programming, as measured by companies such as Nielsen Media Research, is often used as a metric for television advertisement placement and, consequently, for the rates charged to advertisers to air within a given network, television program, or time of day (called a “daypart”). In many countries, including the United States, television campaign advertising is considered indispensable for a political campaign. In other countries, such as France, political advertising on television is heavily restricted,[219] while some countries, such as Norway, completely ban political advertisements.

    The first official, paid television advertisement was broadcast in the United States on 1 July 1941, over New York station WNBT (now WNBC) before a baseball game between the Brooklyn Dodgers and Philadelphia Phillies. The announcement for Bulova watches, for which the company paid anywhere from $4.00 to $9.00 (reports vary), displayed a WNBT test pattern modified to look like a clock with the hands showing the time. The Bulova logo, with the phrase “Bulova Watch Time,” was shown in the lower right-hand quadrant of the test pattern while the second hand swept around the dial for one minute.[220][221] The first TV ad broadcast in the U.K. was on ITV on 22 September 1955, advertising Gibbs SR toothpaste. The first TV ad broadcast in Asia was on Nippon Television in Tokyo on 28 August 1953, advertising Seikosha (now Seiko), which also displayed a clock with the current time.[222]

    United States

    Since their inception in the US in 1941,[223] television commercials have become one of the most effective, persuasive, and popular methods of selling products of many sorts, especially consumer goods. During the 1940s and into the 1950s, programs were hosted by single advertisers. This, in turn, gave the advertisers great creative control over the content of the show. Perhaps due to the quiz show scandals in the 1950s,[224] networks shifted to the magazine concept, introducing advertising breaks with other advertisers.

    U.S. advertising rates are determined primarily by Nielsen ratings. The time of the day and popularity of the channel determine how much a TV commercial can cost. For example, it can cost approximately $750,000 for a 30-second block of commercial time during the highly popular singing competition American Idol, while the same amount of time for the Super Bowl can cost several million dollars. Conversely, lesser-viewed time slots, such as early mornings and weekday afternoons, are often sold in bulk to producers of infomercials at far lower rates. In recent years, paid programs or infomercials have become common, usually in lengths of 30 minutes or one hour. Some drug companies and other businesses have even created “news” items for broadcast, known in the industry as video news releases, paying program directors to use them.[225]

    Some television programs also deliberately place products into their shows as advertisements, a practice started in feature films[226] and known as product placement. For example, a character could be drinking a certain kind of soda, going to a particular chain restaurant, or driving a certain make of car. (This is sometimes very subtle, with shows having vehicles provided by manufacturers at low cost in exchange for product placement.) Sometimes, a specific brand or trademark, or music from a certain artist or group, is used. (This excludes guest appearances by artists who perform on the show.)

    United Kingdom

    The TV regulator oversees TV advertising in the United Kingdom. Its restrictions have applied since the early days of commercially funded TV. Despite this, an early TV mogul, Roy Thomson, likened the broadcasting license to a “license to print money”.[227] Restrictions mean that the big three national commercial TV channels, ITV, Channel 4, and Channel 5, can show an average of only seven minutes of advertising per hour (eight minutes in the peak period). Other broadcasters must average no more than nine minutes (twelve in the peak). This means that many imported TV shows from the U.S. have unnatural pauses where the British company does not use the narrative breaks intended for more frequent U.S. advertising. Advertisements must not be inserted in the course of certain specific proscribed types of programs that last less than half an hour in scheduled duration; this list includes any news or current affairs programs, documentaries, and programs for children; additionally, advertisements may not be carried in a program designed and broadcast for reception in schools or in any religious broadcasting service or other devotional program, or during a formal Royal ceremony or occasion. There also must be clear demarcations in time between the programs and the advertisements. The BBC, being strictly non-commercial, is not allowed to show adverts on television in the U.K., though it has advertising-funded channels abroad. The majority of its budget comes from television license fees (see below) and broadcast syndication, the sale of content to other broadcasters.[228][229]

    Ireland

    Broadcast advertising is regulated by the Broadcasting Authority of Ireland.[230]

    Subscription

    Some TV channels are partly funded from subscriptions; therefore, the signals are encrypted during the broadcast to ensure that only the paying subscribers have access to the decryption codes to watch pay television or specialty channels. Most subscription services are also funded by advertising.

    Taxation or license

    Television services in some countries may be funded by a television licence or a form of taxation, which means that advertising plays a lesser role or no role at all. For example, some channels may carry no advertising at all and some very little, including:

    The British Broadcasting Corporation‘s TV service carries no television advertising on its UK channels and is funded by an annual television license paid by the occupiers of premises receiving live telecasts. As of 2012 it was estimated that approximately 26.8 million UK private domestic households owned televisions, with approximately 25 million TV licences in all premises in force as of 2010.[231] This television license fee is set by the government, but the BBC is not answerable to or controlled by the government.[citation needed] As of 2009 two main BBC TV channels were watched by almost 90% of the population each week and overall had 27% share of total viewing,[232] despite the fact that 85% of homes were multi-channel, with 42% of these having access to 200 free-to-air channels via satellite and another 43% having access to 30 or more channels via Freeview.[233] As of June 2021 the licence that funds the advertising-free BBC TV channels cost £159 for a colour TV Licence and £53.50 for a black and white TV Licence (free or reduced for some groups).[234]

    The Australian Broadcasting Corporation‘s television services in Australia carry no advertising by external sources; it is banned under the Australian Broadcasting Corporation Act 1983, which also ensures its editorial independence. The ABC receives most of its funding from the Australian Government (some revenue is received from its Commercial division), but it has suffered progressive funding cuts under Liberal governments since the 1996 Howard government,[235] with particularly deep cuts in 2014 under the Abbott government,[236] and an ongoing indexation freeze as of 2021.[237][238] The funds provide for the ABC’s television, radio, online, and international outputs, although ABC Australia, which broadcasts throughout the Asia-Pacific region, receives additional funds through DFAT and some advertising on the channel.[239][240]

    In France, government-funded channels carry advertisements, yet those who own television sets have to pay an annual tax (“la redevance audiovisuelle”).[241]

    In Japan, NHK is paid for by license fees (known in Japanese as reception fee (受信料, Jushinryō)). The broadcast law that governs NHK’s funding stipulates that any television equipped to receive NHK is required to pay. The fee is standardized, with discounts for office workers and students who commute, as well as a general discount for residents of Okinawa prefecture.

    Broadcast programming

    Main article: Broadcast programming

    See also: TV listings (UK)

    Broadcast programming, or TV listings in the United Kingdom, is the practice of organizing television programs in a schedule, with broadcast automation used to regularly change the scheduling of TV programs to build an audience for a new show, retain that audience, or compete with other broadcasters’ programs.

    Social aspects

    Main articles: Social aspects of television and Television consumption

    American family watching television, c. 1958

    Television has played a pivotal role in the socialization of the 20th and 21st centuries. There are many aspects of television that can be addressed, including negative issues such as media violence. Current research is discovering that individuals suffering from social isolation can employ television to create what is termed a parasocial or faux relationship with characters from their favorite television shows and movies as a way of deflecting feelings of loneliness and social deprivation.[242] Several studies have found that educational television has many advantages. The article “The Good Things about Television”[243] argues that television can be a very powerful and effective learning tool for children if used wisely. With respect to faith, many Christian denominations use television for religious broadcasting.

    Religious opposition

    Methodist denominations in the conservative holiness movement, such as the Allegheny Wesleyan Methodist Connection and the Evangelical Wesleyan Church, eschew the use of the television.[244] Some Baptists, such as those affiliated with Pensacola Christian College,[245] also eschew television. Many Traditional Catholic congregations such as the Society of Saint Pius X (SSPX), as with Laestadian Lutherans, and Conservative Anabaptists such as the Dunkard Brethren Church, oppose the presence of television in the household, teaching that it is an occasion of sin.[246][247][248][249]

    Negative impacts

    Children, especially those aged five or younger, are at risk of injury from falling televisions.[250] A CRT-style television that falls on a child will, because of its weight, hit with a force equivalent to the child falling from a multi-story building.[251] Newer flat-screen televisions are “top-heavy and have narrow bases”, which means that a small child can easily pull one over. As of 2015, TV tip-overs were responsible for more than 10,000 injuries per year to children in the United States, at a cost of more than US$8 million per year (equivalent to US$10.61 million per year in 2024) in emergency care.[250][252]

    A 2017 study in The Journal of Human Resources found that exposure to cable television reduced cognitive ability and high school graduation rates for boys. This effect was stronger for boys from more educated families. The article suggests a mechanism where light television entertainment crowds out more cognitively stimulating activities.[253]

    With high lead content in CRTs and the rapid diffusion of new flat-panel display technologies, some of which (LCDs) use lamps which contain mercury, there is growing concern about electronic waste from discarded televisions. Related occupational health concerns exist, as well, for disassemblers removing copper wiring and other materials from CRTs. Further environmental concerns related to television design and use relate to the devices’ increasing electrical energy requirements.

  • Microwave 

    Microwave is a form of electromagnetic radiation with wavelengths shorter than other radio waves but longer than infrared waves. Its wavelength ranges from about one meter to one millimeter, corresponding to frequencies between 300 MHz and 300 GHz, broadly construed.[1][2]: 3 [3][4][5][6][excessive citations] A more common definition in radio-frequency engineering is the range between 1 and 100 GHz (wavelengths between 30 cm and 3 mm),[2]: 3  or between 1 and 3000 GHz (30 cm and 0.1 mm).[7][8] The prefix micro- in microwave is not meant to suggest a wavelength in the micrometer range; rather, it indicates that microwaves are small (having shorter wavelengths), compared to the radio waves used in prior radio technology.
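
    As a quick, illustrative check of the band edges quoted above (not part of the original article), the relation λ = c/f can be evaluated directly:

    ```python
    # Wavelength from frequency, lambda = c / f, checking that
    # 300 MHz - 300 GHz corresponds to 1 m - 1 mm.
    C = 299_792_458.0  # speed of light, m/s

    def wavelength_m(freq_hz):
        return C / freq_hz

    for f_ghz in (0.3, 1.0, 30.0, 300.0):
        lam = wavelength_m(f_ghz * 1e9)
        print(f"{f_ghz:g} GHz -> {lam * 1000:.1f} mm")
    # 0.3 GHz -> 999.3 mm (~1 m); 300 GHz -> 1.0 mm
    ```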

    The boundaries between far infrared, terahertz radiation, microwaves, and ultra-high-frequency (UHF) are fairly arbitrary and are used variously between different fields of study. In all cases, microwaves include the entire super high frequency (SHF) band (3 to 30 GHz, or 10 to 1 cm) at minimum. A broader definition includes UHF and extremely high frequency (EHF) (millimeter wave; 30 to 300 GHz) bands as well.

    Frequencies in the microwave range are often referred to by their IEEE radar band designations: S, C, X, Ku, K, or Ka band, or by similar NATO or EU designations.

    Microwaves travel by line-of-sight; unlike lower frequency radio waves, they do not diffract around hills, follow the earth’s surface as ground waves, or reflect from the ionosphere, so terrestrial microwave communication links are limited by the visual horizon to about 40 miles (64 km). At the high end of the band, they are absorbed by gases in the atmosphere, limiting practical communication distances to around a kilometer.

    Microwaves are widely used in modern technology, for example in point-to-point communication links, wireless networks, microwave radio relay networks, radar, satellite and spacecraft communication, medical diathermy and cancer treatment, remote sensing, radio astronomy, particle accelerators, spectroscopy, industrial heating, collision avoidance systems, garage door openers and keyless entry systems, and for cooking food in microwave ovens.

    Electromagnetic spectrum

    Microwaves occupy a place in the electromagnetic spectrum with frequency above ordinary radio waves, and below infrared light:

    Electromagnetic spectrum
    Name | Wavelength | Frequency (Hz) | Photon energy (eV)
    Gamma ray | < 0.01 nm | > 30 EHz | > 124 keV
    X-ray | 0.01 nm – 10 nm | 30 EHz – 30 PHz | 124 keV – 124 eV
    Ultraviolet | 10 nm – 400 nm | 30 PHz – 750 THz | 124 eV – 3 eV
    Visible light | 400 nm – 750 nm | 750 THz – 400 THz | 3 eV – 1.7 eV
    Infrared | 750 nm – 1 mm | 400 THz – 300 GHz | 1.7 eV – 1.24 meV
    Microwave | 1 mm – 1 m | 300 GHz – 300 MHz | 1.24 meV – 1.24 μeV
    Radio | ≥ 1 m | ≤ 300 MHz | ≤ 1.24 μeV

    In descriptions of the electromagnetic spectrum, some sources classify microwaves as radio waves, a subset of the radio wave band, while others classify microwaves and radio waves as distinct types of radiation.[1] This is an arbitrary distinction.

    Frequency bands

    Bands of frequencies in the microwave spectrum are designated by letters. Unfortunately, there are several incompatible band designation systems, and even within a system the frequency ranges corresponding to some of the letters vary somewhat between different application fields.[9][10] The letter system had its origin in World War II in a top-secret U.S. classification of bands used in radar sets; this is the origin of the oldest letter system, the IEEE radar bands. One set of microwave frequency band designations, by the Radio Society of Great Britain (RSGB), is tabulated below:

    Designation | Frequency range | Wavelength range | Typical uses
    L band | 1 to 2 GHz | 15 cm to 30 cm | military telemetry, GPS, mobile phones (GSM), amateur radio
    S band | 2 to 4 GHz | 7.5 cm to 15 cm | weather radar, surface ship radar, some communications satellites, microwave ovens, microwave devices/communications, radio astronomy, mobile phones, wireless LAN, Bluetooth, ZigBee, GPS, amateur radio
    C band | 4 to 8 GHz | 3.75 cm to 7.5 cm | long-distance radio telecommunications, wireless LAN, amateur radio
    X band | 8 to 12 GHz | 25 mm to 37.5 mm | satellite communications, radar, terrestrial broadband, space communications, amateur radio, molecular rotational spectroscopy
    Ku band | 12 to 18 GHz | 16.7 mm to 25 mm | satellite communications, molecular rotational spectroscopy
    K band | 18 to 26.5 GHz | 11.3 mm to 16.7 mm | radar, satellite communications, astronomical observations, automotive radar, molecular rotational spectroscopy
    Ka band | 26.5 to 40 GHz | 7.5 mm to 11.3 mm | satellite communications, molecular rotational spectroscopy
    Q band | 33 to 50 GHz | 6.0 mm to 9.0 mm | satellite communications, terrestrial microwave communications, radio astronomy, automotive radar, molecular rotational spectroscopy
    U band | 40 to 60 GHz | 5.0 mm to 7.5 mm |
    V band | 50 to 75 GHz | 4.0 mm to 6.0 mm | millimeter wave radar research, molecular rotational spectroscopy and other kinds of scientific research
    W band | 75 to 110 GHz | 2.7 mm to 4.0 mm | satellite communications, millimeter-wave radar research, military radar targeting and tracking applications, and some non-military applications, automotive radar
    F band | 90 to 140 GHz | 2.1 mm to 3.3 mm | SHF transmissions: radio astronomy, microwave devices/communications, wireless LAN, most modern radars, communications satellites, satellite television broadcasting, DBS, amateur radio
    D band | 110 to 170 GHz | 1.8 mm to 2.7 mm | EHF transmissions: radio astronomy, high-frequency microwave radio relay, microwave remote sensing, amateur radio, directed-energy weapon, millimeter wave scanner
    Other definitions exist.[11]

    The term P band is sometimes used for UHF frequencies below the L band but is now obsolete per IEEE Std 521.

    When radars were first developed at K band during World War II, it was not known that there was a nearby absorption band (due to water vapor and oxygen in the atmosphere). To avoid this problem, the original K band was split into a lower band, Ku, and an upper band, Ka.[12]

    Propagation

    Main article: Radio propagation

    The atmospheric attenuation of microwaves and far infrared radiation in dry air with a precipitable water vapor level of 0.001 mm. The downward spikes in the graph correspond to frequencies at which microwaves are absorbed more strongly. This graph includes a range of frequencies from 0 to 1 THz; the microwaves are the subset in the range between 0.3 and 300 gigahertz.

    Microwaves travel solely by line-of-sight paths; unlike lower frequency radio waves, they do not travel as ground waves which follow the contour of the Earth, or reflect off the ionosphere (skywaves).[13] Although at the low end of the band they can pass through building walls enough for useful reception, rights of way cleared to the first Fresnel zone are usually required. Therefore, on the surface of the Earth, microwave communication links are limited by the visual horizon to about 30–40 miles (48–64 km). Microwaves are absorbed by moisture in the atmosphere, and the attenuation increases with frequency, becoming a significant factor (rain fade) at the high end of the band. Beginning at about 40 GHz, atmospheric gases also begin to absorb microwaves, so above this frequency microwave transmission is limited to a few kilometers. A spectral band structure causes absorption peaks at specific frequencies (see the attenuation graph above). Above 100 GHz, the absorption of electromagnetic radiation by Earth’s atmosphere is so effective that it is in effect opaque, until the atmosphere becomes transparent again in the so-called infrared and optical window frequency ranges.
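
    A small sketch may help make the horizon limit concrete. It uses the common 4/3-earth rule of thumb for the radio horizon, which is an assumption on my part rather than a figure from the text; the tower heights are likewise hypothetical.

    ```python
    import math

    # Rough line-of-sight horizon for a microwave link, using the
    # 4/3-earth approximation d_km ~ 4.12 * sqrt(h_m) (a standard
    # engineering rule of thumb, assumed here).
    def radio_horizon_km(antenna_height_m):
        return 4.12 * math.sqrt(antenna_height_m)

    for h in (30, 60, 100):  # hypothetical tower heights in meters
        print(f"{h} m antenna -> horizon ~{radio_horizon_km(h):.0f} km")
    # Two 60 m towers see each other over roughly 2 * 32 = 64 km,
    # consistent with the 30-40 mile (48-64 km) figure quoted above.
    ```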

    Troposcatter

    Main article: Tropospheric scatter

    In a microwave beam directed at an angle into the sky, a small amount of the power will be randomly scattered as the beam passes through the troposphere.[13] A sensitive receiver beyond the horizon with a high gain antenna focused on that area of the troposphere can pick up the signal. This technique has been used at frequencies between 0.45 and 5 GHz in tropospheric scatter (troposcatter) communication systems to communicate beyond the horizon, at distances up to 300 km.

    Antennas

    Waveguide is used to carry microwaves. Example of waveguides and a diplexer in an air traffic control radar.

    The short wavelengths of microwaves allow omnidirectional antennas for portable devices to be made very small, from 1 to 20 centimeters long, so microwave frequencies are widely used for wireless devices such as cell phones, cordless phones, wireless LAN (Wi-Fi) access for laptops, and Bluetooth earphones. Antennas used include short whip antennas, rubber ducky antennas, sleeve dipoles, patch antennas, and increasingly the printed circuit inverted F antenna (PIFA) used in cell phones.

    Their short wavelength also allows narrow beams of microwaves to be produced by conveniently small high gain antennas from a half meter to 5 meters in diameter. Therefore, beams of microwaves are used for point-to-point communication links and for radar. An advantage of narrow beams is that they do not interfere with nearby equipment using the same frequency, allowing frequency reuse by nearby transmitters. Parabolic (“dish”) antennas are the most widely used directive antennas at microwave frequencies, but horn antennas, slot antennas, and lens antennas are also used. Flat microstrip antennas are being increasingly used in consumer devices. Another directive antenna practical at microwave frequencies is the phased array, a computer-controlled array of antennas that produces a beam that can be electronically steered in different directions.
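
    To illustrate why small dishes give narrow beams, the following sketch uses the common beamwidth approximation θ ≈ 70·λ/D degrees; the rule of thumb and the example dish are assumptions for illustration, not figures from the text.

    ```python
    # Approximate half-power beamwidth of a parabolic dish antenna,
    # theta_deg ~ 70 * lambda / D (a common engineering approximation).
    C = 299_792_458.0

    def beamwidth_deg(freq_hz, dish_diameter_m):
        wavelength = C / freq_hz
        return 70.0 * wavelength / dish_diameter_m

    # A hypothetical 1 m dish at 10 GHz (3 cm wavelength):
    print(f"{beamwidth_deg(10e9, 1.0):.1f} degrees")  # ~2.1 degrees
    ```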

    At microwave frequencies, the transmission lines which are used to carry lower frequency radio waves to and from antennas, such as coaxial cable and parallel wire lines, have excessive power losses, so when low attenuation is required, microwaves are carried by metal pipes called waveguides. Due to the high cost and maintenance requirements of waveguide runs, in many microwave antennas the output stage of the transmitter or the RF front end of the receiver is located at the antenna.

    Design and analysis

    The term microwave also has a more technical meaning in electromagnetics and circuit theory.[14][15] Apparatus and techniques may be described qualitatively as “microwave” when the wavelengths of signals are roughly the same as the dimensions of the circuit, so that lumped-element circuit theory is inaccurate, and instead distributed circuit elements and transmission-line theory are more useful methods for design and analysis.

    As a consequence, practical microwave circuits tend to move away from the discrete resistors, capacitors, and inductors used with lower-frequency radio waves. Open-wire and coaxial transmission lines used at lower frequencies are replaced by waveguides and stripline, and lumped-element tuned circuits are replaced by cavity resonators or resonant stubs.[14] In turn, at even higher frequencies, where the wavelength of the electromagnetic waves becomes small in comparison to the size of the structures used to process them, microwave techniques become inadequate, and the methods of optics are used.
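
    A small numerical sketch of the lumped-versus-distributed boundary described above may be useful. The one-tenth-of-a-wavelength threshold is a common rule of thumb assumed here, and the 1 cm trace is a made-up example.

    ```python
    # Once a conductor is an appreciable fraction of a wavelength long
    # (commonly taken as ~1/10 of a wavelength), transmission-line
    # analysis is needed instead of lumped-element circuit theory.
    C = 299_792_458.0

    def fraction_of_wavelength(length_m, freq_hz):
        return length_m / (C / freq_hz)

    for f in (1e6, 100e6, 10e9):  # 1 MHz, 100 MHz, 10 GHz
        frac = fraction_of_wavelength(0.01, f)  # a 1 cm trace
        regime = "distributed (microwave)" if frac > 0.1 else "lumped"
        print(f"{f / 1e9:g} GHz: 1 cm = {frac:.5f} wavelengths -> {regime}")
    ```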

    Sources

    Cutaway view inside a cavity magnetron as used in a microwave oven (left). Antenna splitter: microstrip techniques become increasingly necessary at higher frequencies (right).

    Disassembled radar speed gun. The grey assembly attached to the end of the copper-colored horn antenna is the Gunn diode which generates the microwaves.

    High-power microwave sources use specialized vacuum tubes to generate microwaves. These devices operate on different principles from low-frequency vacuum tubes, using the ballistic motion of electrons in a vacuum under the influence of controlling electric or magnetic fields, and include the magnetron (used in microwave ovens), klystron, traveling-wave tube (TWT), and gyrotron. These devices work in the density modulated mode, rather than the current modulated mode. This means that they work on the basis of clumps of electrons flying ballistically through them, rather than using a continuous stream of electrons.

    Low-power microwave sources use solid-state devices such as the field-effect transistor (at least at lower frequencies), tunnel diodes, Gunn diodes, and IMPATT diodes.[16] Low-power sources are available as benchtop instruments, rackmount instruments, embeddable modules and in card-level formats. A maser is a solid-state device that amplifies microwaves using similar principles to the laser, which amplifies higher-frequency light waves.

    All warm objects emit low level microwave black-body radiation, depending on their temperature, so in meteorology and remote sensing, microwave radiometers are used to measure the temperature of objects or terrain.[17] The sun[18] and other astronomical radio sources such as Cassiopeia A emit low level microwave radiation which carries information about their makeup, which is studied by radio astronomers using receivers called radio telescopes.[17] The cosmic microwave background radiation (CMBR), for example, is a weak microwave noise filling empty space which is a major source of information on cosmology‘s Big Bang theory of the origin of the Universe.

    Applications

    Microwave technology is extensively used for point-to-point telecommunications (i.e., non-broadcast uses). Microwaves are especially suitable for this use since they are more easily focused into narrower beams than radio waves, allowing frequency reuse; their comparatively higher frequencies allow broad bandwidth and high data transmission rates, and antenna sizes are smaller than at lower frequencies because antenna size is inversely proportional to the transmitted frequency. Microwaves are used in spacecraft communication, and much of the world’s data, TV, and telephone communications are transmitted long distances by microwaves between ground stations and communications satellites. Microwaves are also employed in microwave ovens and in radar technology.

    Communication

    Main articles: Point-to-point (telecommunications), Microwave transmission, and Satellite communications

    A satellite dish on a residence, which receives satellite television over a Ku band 12–14 GHz microwave beam from a direct broadcast communications satellite in a geostationary orbit 35,700 kilometres (22,000 miles) above the Earth

    Before the advent of fiber-optic transmission, most long-distance telephone calls were carried via networks of microwave radio relay links run by carriers such as AT&T Long Lines. Starting in the early 1950s, frequency-division multiplexing was used to send up to 5,400 telephone channels on each microwave radio channel, with as many as ten radio channels combined into one antenna for the hop to the next site, up to 70 km away.

    Wireless LAN protocols, such as Bluetooth and the IEEE 802.11 specifications used for Wi-Fi, also use microwaves in the 2.4 GHz ISM band, although 802.11a uses ISM band and U-NII frequencies in the 5 GHz range. Licensed long-range (up to about 25 km) Wireless Internet Access services have been used for almost a decade in many countries in the 3.5–4.0 GHz range. The FCC recently[when?] carved out spectrum for carriers that wish to offer services in this range in the U.S. — with emphasis on 3.65 GHz. Dozens of service providers across the country are securing or have already received licenses from the FCC to operate in this band. The WIMAX service offerings that can be carried on the 3.65 GHz band will give business customers another option for connectivity.

    Metropolitan area network (MAN) protocols, such as WiMAX (Worldwide Interoperability for Microwave Access) are based on standards such as IEEE 802.16, designed to operate between 2 and 11 GHz. Commercial implementations are in the 2.3 GHz, 2.5 GHz, 3.5 GHz and 5.8 GHz ranges.

    Mobile Broadband Wireless Access (MBWA) protocols based on standards specifications such as IEEE 802.20 or ATIS/ANSI HC-SDMA (such as iBurst) operate between 1.6 and 2.3 GHz to give mobility and in-building penetration characteristics similar to mobile phones but with vastly greater spectral efficiency.[19]

    Some mobile phone networks, like GSM, use the low-microwave/high-UHF frequencies around 1.9 GHz in the Americas and 1.8 GHz elsewhere. DVB-SH and S-DMB use 1.452 to 1.492 GHz, while proprietary/incompatible satellite radio in the U.S. uses around 2.3 GHz for DARS.

    Microwave radio is used in point-to-point telecommunications transmissions because, due to their short wavelength, highly directional antennas are smaller and therefore more practical than they would be at longer wavelengths (lower frequencies). There is also more bandwidth in the microwave spectrum than in the rest of the radio spectrum; the usable bandwidth below 300 MHz is less than 300 MHz while many GHz can be used above 300 MHz. Typically, microwaves are used in remote broadcasting of news or sports events as the backhaul link to transmit a signal from a remote location to a television station from a specially equipped van. See broadcast auxiliary service (BAS), remote pickup unit (RPU), and studio/transmitter link (STL).

    Most satellite communications systems operate in the C, X, Ka, or Ku bands of the microwave spectrum. These frequencies allow large bandwidth while avoiding the crowded UHF frequencies and staying below the atmospheric absorption of EHF frequencies. Satellite TV either operates in the C band for the traditional large dish fixed satellite service or Ku band for direct-broadcast satellite. Military communications run primarily over X or Ku-band links, with Ka band being used for Milstar.

    Further information: Satellite navigation and Navigation

    Global Navigation Satellite Systems (GNSS) including the Chinese Beidou, the American Global Positioning System (introduced in 1978) and the Russian GLONASS broadcast navigational signals in various bands between about 1.2 GHz and 1.6 GHz.

    Radar

    Main article: Radar

    The parabolic antenna (lower curved surface) of an ASR-9 airport surveillance radar which radiates a narrow vertical fan-shaped beam of 2.7–2.9 GHz (S band) microwaves to locate aircraft in the airspace surrounding an airport

    Radar is a radiolocation technique in which a beam of radio waves emitted by a transmitter bounces off an object and returns to a receiver, allowing the location, range, speed, and other characteristics of the object to be determined. The short wavelength of microwaves causes large reflections from objects the size of motor vehicles, ships and aircraft. Also, at these wavelengths, the high gain antennas such as parabolic antennas which are required to produce the narrow beamwidths needed to accurately locate objects are conveniently small, allowing them to be rapidly turned to scan for objects. Therefore, microwave frequencies are the main frequencies used in radar. Microwave radar is widely used for applications such as air traffic control, weather forecasting, navigation of ships, and speed limit enforcement. Long-distance radars use the lower microwave frequencies since at the upper end of the band atmospheric absorption limits the range, but millimeter waves are used for short-range radar such as collision avoidance systems.
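
    The basic range and speed relations behind the paragraph above are simple enough to show in a short sketch; the echo delay and target speed below are illustrative values, not figures from the text.

    ```python
    # Basic monostatic radar relations: range from the round-trip echo
    # delay, and Doppler shift from the target's radial velocity.
    C = 299_792_458.0

    def target_range_km(echo_delay_s):
        return C * echo_delay_s / 2 / 1000   # halve for the round trip

    def doppler_shift_hz(radial_velocity_mps, freq_hz):
        return 2 * radial_velocity_mps * freq_hz / C

    print(f"{target_range_km(100e-6):.1f} km")       # 100 us echo -> ~15 km
    print(f"{doppler_shift_hz(30, 2.8e9):.0f} Hz")   # 30 m/s at 2.8 GHz (S band) -> ~560 Hz
    ```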

    Some of the dish antennas of the Atacama Large Millimeter Array (ALMA), a radio telescope located in northern Chile. It receives microwaves in the millimeter wave range, 31–1000 GHz.

    Maps of the cosmic microwave background radiation (CMBR), showing the improved resolution which has been achieved with better microwave radio telescopes

    Radio astronomy

    Main article: Radio astronomy

    Microwaves emitted by astronomical radio sources such as planets, stars, galaxies, and nebulas are studied in radio astronomy with large dish antennas called radio telescopes. In addition to receiving naturally occurring microwave radiation, radio telescopes have been used in active radar experiments to bounce microwaves off planets in the solar system, to determine the distance to the Moon or to map the invisible surface of Venus through cloud cover.

    A recently[when?] completed microwave radio telescope is the Atacama Large Millimeter Array, located at more than 5,000 meters (16,597 ft) altitude in Chile, which observes the universe in the millimeter and submillimeter wavelength ranges. The world’s largest ground-based astronomy project to date, it consists of 66 dishes and was built in an international collaboration by Europe, North America, East Asia and Chile.[20][21]

    A major recent[when?] focus of microwave radio astronomy has been mapping the cosmic microwave background radiation (CMBR) discovered in 1964 by radio astronomers Arno Penzias and Robert Wilson. This faint background radiation, which fills the universe and is almost the same in all directions, is “relic radiation” from the Big Bang, and is one of the few sources of information about conditions in the early universe. Due to the expansion and thus cooling of the Universe, the originally high-energy radiation has been shifted into the microwave region of the radio spectrum. Sufficiently sensitive radio telescopes can detect the CMBR as a faint signal that is not associated with any star, galaxy, or other object.[22]

    Heating and power application

    Small microwave oven on a kitchen counter
    Microwaves are widely used for heating in industrial processes. A microwave tunnel oven for softening plastic rods prior to extrusion.

    A microwave oven passes microwave radiation at a frequency near 2.45 GHz (12 cm) through food, causing dielectric heating primarily by absorption of the energy in water. Microwave ovens became common kitchen appliances in Western countries in the late 1970s, following the development of less expensive cavity magnetrons. Water in the liquid state possesses many molecular interactions that broaden the absorption peak. In the vapor phase, isolated water molecules absorb at around 22 GHz, almost ten times the frequency of the microwave oven.

    Microwave heating is used in industrial processes for drying and curing products.

    Many semiconductor processing techniques use microwaves to generate plasma for such purposes as reactive ion etching and plasma-enhanced chemical vapor deposition (PECVD).

    Microwaves are used in stellarators and tokamak experimental fusion reactors to help break down the gas into a plasma and heat it to very high temperatures. The frequency is tuned to the cyclotron resonance of the electrons in the magnetic field, anywhere between 2 and 200 GHz, hence it is often referred to as Electron Cyclotron Resonance Heating (ECRH). The upcoming ITER thermonuclear reactor[23] will use up to 20 MW of 170 GHz microwaves.
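
    As a consistency check on the 170 GHz figure above, the electron cyclotron frequency can be computed from the standard formula f_ce = eB/(2πm_e), which works out to roughly 28 GHz per tesla; the formula and constants are standard physics assumed here, not taken from the text.

    ```python
    import math

    # Electron cyclotron frequency f_ce = e*B / (2*pi*m_e),
    # roughly 28 GHz per tesla of magnetic field.
    E_CHARGE = 1.602176634e-19      # elementary charge, C
    M_ELECTRON = 9.1093837015e-31   # electron mass, kg

    def f_ce_ghz(b_tesla):
        return E_CHARGE * b_tesla / (2 * math.pi * M_ELECTRON) / 1e9

    print(f"1 T    -> {f_ce_ghz(1.0):.1f} GHz")   # ~28 GHz
    print(f"6.07 T -> {f_ce_ghz(6.07):.0f} GHz")  # ~170 GHz at the fundamental resonance
    ```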

    Microwaves can be used to transmit power over long distances, and post-World War II research was done to examine possibilities. NASA worked in the 1970s and early 1980s to research the possibilities of using solar power satellite (SPS) systems with large solar arrays that would beam power down to the Earth’s surface via microwaves.

    Less-than-lethal weaponry exists that uses millimeter waves to heat a thin layer of human skin to an intolerable temperature so as to make the targeted person move away. A two-second burst of the 95 GHz focused beam heats the skin to a temperature of 54 °C (129 °F) at a depth of 0.4 millimetres (1⁄64 in). The United States Air Force and Marines are currently using this type of active denial system in fixed installations.[24]

    Spectroscopy

    Microwave radiation is used in electron paramagnetic resonance (EPR or ESR) spectroscopy, typically in the X-band region (~9 GHz), usually in conjunction with magnetic fields of about 0.3 T. This technique provides information on unpaired electrons in chemical systems, such as free radicals or transition metal ions such as Cu(II). Microwave radiation is also used to perform rotational spectroscopy and can be combined with electrochemistry as in microwave enhanced electrochemistry.
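
    A short sketch can show why X-band frequencies pair with ~0.3 T fields: the EPR resonance condition hν = gμ_B·B gives roughly 28 GHz per tesla for a nearly free electron. The formula and constants are standard physics assumed here as a consistency check, not values from the text.

    ```python
    # EPR resonance condition h*nu = g*mu_B*B for a nearly free electron.
    H_PLANCK = 6.62607015e-34     # Planck constant, J*s
    MU_BOHR = 9.2740100783e-24    # Bohr magneton, J/T
    G_E = 2.0023                  # free-electron g-factor

    def epr_frequency_ghz(b_tesla, g=G_E):
        return g * MU_BOHR * b_tesla / H_PLANCK / 1e9

    print(f"0.30 T -> {epr_frequency_ghz(0.30):.1f} GHz")  # ~8.4 GHz
    print(f"0.34 T -> {epr_frequency_ghz(0.34):.1f} GHz")  # ~9.5 GHz (typical X band)
    ```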

    Frequency measurement

    Absorption wavemeter for measuring in the Ku band

    Microwave frequency can be measured by either electronic or mechanical techniques.

    Frequency counters or high frequency heterodyne systems can be used. Here the unknown frequency is compared with harmonics of a known lower frequency by use of a low-frequency generator, a harmonic generator and a mixer. The accuracy of the measurement is limited by the accuracy and stability of the reference source.

    Mechanical methods require a tunable resonator such as an absorption wavemeter, which has a known relation between a physical dimension and frequency.

    In a laboratory setting, Lecher lines can be used to directly measure the wavelength on a transmission line made of parallel wires; the frequency can then be calculated. A similar technique is to use a slotted waveguide or slotted coaxial line to directly measure the wavelength. These devices consist of a probe introduced into the line through a longitudinal slot so that the probe is free to travel up and down the line. Slotted lines are primarily intended for measurement of the voltage standing wave ratio on the line. However, provided a standing wave is present, they may also be used to measure the distance between the nodes, which is equal to half the wavelength. The precision of this method is limited by the determination of the nodal locations.
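
    The arithmetic behind the node-spacing method is simple enough to sketch; the node spacing below is a made-up example, and waveguide-specific effects (the guide wavelength differing from free space) are ignored in this free-space sketch.

    ```python
    # Frequency from a slotted-line measurement: adjacent standing-wave
    # nodes are half a wavelength apart, so f = c / (2 * node_spacing).
    C = 299_792_458.0

    def frequency_ghz(node_spacing_m):
        return C / (2 * node_spacing_m) / 1e9

    print(f"{frequency_ghz(0.015):.2f} GHz")  # 1.5 cm node spacing -> ~10 GHz
    ```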

    Effects on health

    Further information: Electromagnetic radiation and health and Microwave burn

    Microwaves are non-ionizing radiation, which means that microwave photons do not contain sufficient energy to ionize molecules or break chemical bonds, or cause DNA damage, as ionizing radiation such as x-rays or ultraviolet can.[25] The word “radiation” refers to energy radiating from a source and not to radioactivity. The main effect of absorption of microwaves is to heat materials; the electromagnetic fields cause polar molecules to vibrate. It has not been shown conclusively that microwaves (or other non-ionizing electromagnetic radiation) have significant adverse biological effects at low levels. Some, but not all, studies suggest that long-term exposure may have a carcinogenic effect.[26]

    During World War II, it was observed that individuals in the radiation path of radar installations experienced clicks and buzzing sounds in response to microwave radiation. Research by NASA in the 1970s showed this to be caused by thermal expansion in parts of the inner ear. In 1955, Dr. James Lovelock was able to reanimate rats chilled to between 0 and 1 °C (32 and 34 °F) using microwave diathermy.[27]

    When injury from exposure to microwaves occurs, it usually results from dielectric heating induced in the body. The lens and cornea of the eye are especially vulnerable because they contain no blood vessels that can carry away heat. Exposure to microwave radiation can produce cataracts by this mechanism, because the microwave heating denatures proteins in the crystalline lens of the eye[28] (in the same way that heat turns egg whites white and opaque). Exposure to heavy doses of microwave radiation (as from an oven that has been tampered with to allow operation even with the door open) can produce heat damage in other tissues as well, up to and including serious burns that may not be immediately evident because of the tendency for microwaves to heat deeper tissues with higher moisture content.

    History

    Hertzian optics

    Microwaves were first generated in the 1890s in some of the earliest radio wave experiments by physicists who thought of them as a form of “invisible light”.[29] James Clerk Maxwell in his 1873 theory of electromagnetism, now called Maxwell’s equations, had predicted that a coupled electric field and magnetic field could travel through space as an electromagnetic wave, and proposed that light consisted of electromagnetic waves of short wavelength. In 1888, German physicist Heinrich Hertz was the first to demonstrate the existence of electromagnetic waves, generating radio waves using a primitive spark gap radio transmitter.[30]

    Hertz and the other early radio researchers were interested in exploring the similarities between radio waves and light waves, to test Maxwell’s theory. They concentrated on producing short wavelength radio waves in the UHF and microwave ranges, with which they could duplicate classic optics experiments in their laboratories, using quasioptical components such as prisms and lenses made of paraffin, sulfur, and pitch, and wire diffraction gratings, to refract and diffract radio waves like light rays.[31] Hertz produced waves up to 450 MHz; his directional 450 MHz transmitter consisted of a 26 cm brass rod dipole antenna with a spark gap between the ends, suspended at the focal line of a parabolic antenna made of a curved zinc sheet, powered by high voltage pulses from an induction coil.[30] His historic experiments demonstrated that radio waves, like light, exhibited refraction, diffraction, polarization, interference, and standing waves,[31] proving that radio waves and light waves were both forms of Maxwell’s electromagnetic waves.

    • Heinrich Hertz‘s 450 MHz spark transmitter, 1888, consisting of 23 cm dipole and spark gap at the focus of a parabolic reflector
    • Jagadish Chandra Bose in 1894 was the first person to produce millimeter waves; his spark oscillator (in box, right) generated 60 GHz (5 mm) waves using 3 mm metal ball resonators.
    • 3 mm spark ball oscillator Bose used to generate 60 GHz waves
    • Microwave spectroscopy experiment by John Ambrose Fleming in 1897 showing refraction of 1.4 GHz microwaves by paraffin prism, duplicating earlier experiments by Bose and Righi.
    • Augusto Righi‘s 12 GHz spark oscillator and receiver, 1895
    • Oliver Lodge’s 5 inch oscillator ball he used to generate 1.2 GHz microwaves in 1894
    • 1.2 GHz microwave spark transmitter (left) and coherer receiver (right) used by Guglielmo Marconi during his 1895 experiments had a range of 6.5 km (4.0 mi)

    Beginning in 1894 Indian physicist Jagadish Chandra Bose performed the first experiments with microwaves. He was the first person to produce millimeter waves, generating frequencies up to 60 GHz (5 mm wavelength) using a 3 mm metal ball spark oscillator.[32][31] Bose also invented waveguides, horn antennas, and semiconductor crystal detectors for use in his experiments. Independently in 1894, Oliver Lodge and Augusto Righi experimented with 1.5 and 12 GHz microwaves respectively, generated by small metal ball spark resonators.[31] Russian physicist Pyotr Lebedev in 1895 generated 50 GHz millimeter waves.[31] In 1897 Lord Rayleigh solved the mathematical boundary-value problem of electromagnetic waves propagating through conducting tubes and dielectric rods of arbitrary shape,[33][34][35][36] which gave the modes and cutoff frequency of microwaves propagating through a waveguide.[30]
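
    Rayleigh’s result is what gives every waveguide a lowest usable frequency. As an illustration under standard assumptions (air-filled rectangular guide, dominant TE10 mode, f_c = c/2a), the sketch below uses the common WR-90 X-band guide as an example; the dimensions are textbook values, not taken from the cited sources.

```python
# Illustrative sketch: cutoff frequency of the dominant TE10 mode in an
# air-filled rectangular waveguide is f_c = c / (2 * a), a = broad-wall width.
C_VACUUM = 299_792_458.0  # speed of light, m/s

def te10_cutoff_hz(broad_wall_m: float) -> float:
    """TE10 cutoff frequency for an air-filled rectangular waveguide."""
    return C_VACUUM / (2.0 * broad_wall_m)

# Example: WR-90 (X-band) guide with a = 22.86 mm -> cutoff near 6.6 GHz;
# signals below this frequency cannot propagate down the guide.
print(f"WR-90: f_c ≈ {te10_cutoff_hz(0.02286) / 1e9:.2f} GHz")
```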

    However, since microwaves were limited to line-of-sight paths, they could not communicate beyond the visual horizon, and the low power of the spark transmitters then in use limited their practical range to a few miles. The subsequent development of radio communication after 1896 employed lower frequencies, which could travel beyond the horizon as ground waves and by reflecting off the ionosphere as skywaves, and microwave frequencies were not further explored at this time.

    First microwave communication experiments

    Practical use of microwave frequencies did not occur until the 1940s and 1950s due to a lack of adequate sources, since the triode vacuum tube (valve) electronic oscillator used in radio transmitters could not produce frequencies above a few hundred megahertz due to excessive electron transit time and interelectrode capacitance.[30] By the 1930s, the first low-power microwave vacuum tubes had been developed using new principles: the Barkhausen–Kurz tube and the split-anode magnetron.[37][30] These could generate a few watts of power at frequencies up to a few gigahertz and were used in the first experiments in communication with microwaves.

    • Antennas of 1931 experimental 1.7 GHz microwave relay link across the English Channel
    • Experimental 3.3 GHz (9 cm) transmitter 1933 at Westinghouse labs [38] transmits voice over a mile.
    • Southworth (at left) demonstrating waveguide at IRE meeting in 1938, showing 1.5 GHz microwaves passing through the 7.5 m flexible metal hose registering on a diode detector
    • The first modern horn antenna in 1938 with inventor Wilmer L. Barrow

    In 1931 an Anglo-French consortium headed by Andre C. Clavier demonstrated the first experimental microwave relay link, across the English Channel 40 miles (64 km) between Dover, UK and Calais, France.[39][40] The system transmitted telephony, telegraph and facsimile data over bidirectional 1.7 GHz beams with a power of one-half watt, produced by miniature Barkhausen–Kurz tubes at the focus of 10-foot (3 m) metal dishes.

    A word was needed to distinguish these new shorter wavelengths, which had previously been lumped into the “short wave” band, which meant all waves shorter than 200 meters. The terms quasi-optical waves and ultrashort waves were used briefly[37][38] but did not catch on. The first usage of the word micro-wave apparently occurred in 1931.[40][41]

    Radar development

    The development of radar, mainly in secrecy, before and during World War II, resulted in the technological advances which made microwaves practical.[30] Microwave wavelengths in the centimeter range were required to give the small radar antennas which were compact enough to fit on aircraft a narrow enough beamwidth to localize enemy aircraft. It was found that conventional transmission lines used to carry radio waves had excessive power losses at microwave frequencies, and George Southworth at Bell Labs and Wilmer Barrow at MIT independently invented waveguide in 1936.[33] Barrow invented the horn antenna in 1938 as a means to efficiently radiate microwaves into or out of a waveguide. In a microwave receiver, a nonlinear component was needed that would act as a detector and mixer at these frequencies, as vacuum tubes had too much capacitance. To fill this need researchers resurrected an obsolete technology, the point contact crystal detector (cat whisker detector) which was used as a demodulator in crystal radios around the turn of the century before vacuum tube receivers.[30][42] The low capacitance of semiconductor junctions allowed them to function at microwave frequencies. The first modern silicon and germanium diodes were developed as microwave detectors in the 1930s, and the principles of semiconductor physics learned during their development led to semiconductor electronics after the war.[30]

    • Randall and Boot‘s prototype cavity magnetron tube at the University of Birmingham, 1940. In use the tube was installed between the poles of an electromagnet
    • First commercial klystron tube, by General Electric, 1940, sectioned to show internal construction
    • British Mk. VIII, the first microwave air intercept radar, in nose of British fighter
    • Mobile US Army microwave relay station 1945 demonstrating relay systems using frequencies from 100 MHz to 4.9 GHz which could transmit up to 8 phone calls on a beam

    The first powerful sources of microwaves were invented at the beginning of World War II: the klystron tube by Russell and Sigurd Varian at Stanford University in 1937, and the cavity magnetron tube by John Randall and Harry Boot at Birmingham University, UK in 1940.[30] Ten-centimeter (3 GHz) microwave radar powered by the magnetron tube was in use on British warplanes in late 1941 and proved to be a game changer. Britain’s 1940 decision to share its microwave technology with its US ally (the Tizard Mission) significantly shortened the war. The MIT Radiation Laboratory, established secretly at the Massachusetts Institute of Technology in 1940 to research radar, produced much of the theoretical knowledge necessary to use microwaves. The first microwave relay systems were developed by the Allied military near the end of the war and used for secure battlefield communication networks in the European theater.

    Post World War II exploitation

    After World War II, microwaves were rapidly exploited commercially.[30] Due to their high frequency they had a very large information-carrying capacity (bandwidth); a single microwave beam could carry tens of thousands of phone calls. In the 1950s and 1960s transcontinental microwave relay networks were built in the US and Europe to exchange telephone calls between cities and distribute television programs. In the new television broadcasting industry, from the 1940s microwave dishes were used to transmit backhaul video feeds from mobile production trucks back to the studio, allowing the first remote TV broadcasts. The first communications satellites were launched in the 1960s, which relayed telephone calls and television between widely separated points on Earth using microwave beams. In 1964, Arno Penzias and Robert Woodrow Wilson, while investigating noise in a satellite horn antenna at Bell Labs in Holmdel, New Jersey, discovered the cosmic microwave background radiation.

    C-band horn antennas at a telephone switching center in Seattle, belonging to AT&T’s Long Lines microwave relay network built in the 1960s.

    Microwave lens antenna used in the radar for the 1954 Nike Ajax anti-aircraft missile

    The first commercial microwave oven model, the Raytheon Radarange, installed in the kitchen of the US merchant ship NS Savannah in 1961

    Telstar 1 communications satellite, launched July 10, 1962, the first satellite to relay television signals. The ring of microwave cavity antennas received the 6.39 GHz uplink and transmitted the 4.17 GHz downlink signal.

    Microwave radar became the central technology used in air traffic control, maritime navigation, anti-aircraft defense, ballistic missile detection, and later many other uses. Radar and satellite communication motivated the development of modern microwave antennas: the parabolic antenna (the most common type), Cassegrain antenna, lens antenna, slot antenna, and phased array.

    The ability of short waves to quickly heat materials and cook food had been investigated in the 1930s by Ilia E. Mouromtseff at Westinghouse, who at the 1933 Chicago World’s Fair demonstrated cooking meals with a 60 MHz radio transmitter.[43] In 1945 Percy Spencer, an engineer working on radar at Raytheon, noticed that microwave radiation from a magnetron oscillator melted a candy bar in his pocket. He investigated cooking with microwaves and invented the microwave oven, consisting of a magnetron feeding microwaves into a closed metal cavity containing food, which was patented by Raytheon on 8 October 1945. Due to their expense, microwave ovens were initially used in institutional kitchens, but by 1986 roughly 25% of households in the U.S. owned one. Microwave heating became widely used as an industrial process in industries such as plastics fabrication, and as a medical therapy to kill cancer cells in microwave hyperthermy.

    The traveling wave tube (TWT) developed in 1943 by Rudolph Kompfner and John Pierce provided a high-power tunable source of microwaves up to 50 GHz and became the most widely used microwave tube (besides the ubiquitous magnetron used in microwave ovens). The gyrotron tube family developed in Russia could produce megawatts of power up into millimeter wave frequencies and is used in industrial heating and plasma research, and to power particle accelerators and nuclear fusion reactors.

    Solid state microwave devices

    First caesium atomic clock, and inventor Louis Essen (left), 1955

    Experimental ruby maser (lower end of rod), 1961

    Microwave oscillator consisting of a Gunn diode inside a cavity resonator, 1970s

    Modern radar speed gun. At the right end of the copper horn antenna is the Gunn diode (grey assembly) which generates the microwaves.

    The development of semiconductor electronics in the 1950s led to the first solid state microwave devices which worked by a new principle: negative resistance (some of the prewar microwave tubes had also used negative resistance).[30] The feedback oscillators and two-port amplifiers which were used at lower frequencies became unstable at microwave frequencies, and negative resistance oscillators and amplifiers based on one-port devices like diodes worked better.

    The tunnel diode, invented in 1957 by Japanese physicist Leo Esaki, could produce a few milliwatts of microwave power. Its invention set off a search for better negative resistance semiconductor devices for use as microwave oscillators, resulting in the invention of the IMPATT diode in 1956 by W.T. Read and Ralph L. Johnston and the Gunn diode in 1962 by J. B. Gunn.[30] Diodes are the most widely used microwave sources today.

    Two low-noise solid state negative resistance microwave amplifiers were developed: the maser, invented in 1953 by Charles H. Townes, James P. Gordon, and H. J. Zeiger, and the varactor parametric amplifier, developed in 1956 by Marion Hines.[30] The parametric amplifier and the ruby maser, invented in 1958 by a team at Bell Labs headed by H.E.D. Scovil, were used for low-noise microwave receivers in radio telescopes and satellite ground stations. The maser led to the development of atomic clocks, which keep time using a precise microwave frequency emitted by atoms undergoing an electron transition between two energy levels. Negative resistance amplifier circuits required the invention of new nonreciprocal waveguide components, such as circulators, isolators, and directional couplers. In 1969 Kaneyuki Kurokawa derived mathematical conditions for stability in negative resistance circuits which formed the basis of microwave oscillator design.[44]

    Microwave integrated circuits

    Ku-band microstrip circuit used in a satellite television dish

    Prior to the 1970s microwave devices and circuits were bulky and expensive, so microwave frequencies were generally limited to the output stage of transmitters and the RF front end of receivers, and signals were heterodyned to a lower intermediate frequency for processing. The period from the 1970s to the present has seen the development of tiny inexpensive active solid-state microwave components which can be mounted on circuit boards, allowing circuits to perform significant signal processing at microwave frequencies. This has made possible satellite television, cable television, GPS devices, and modern wireless devices such as smartphones, Wi-Fi, and Bluetooth, which connect to networks using microwaves.

    Microstrip, a type of transmission line usable at microwave frequencies, was invented with printed circuits in the 1950s.[30] The ability to cheaply fabricate a wide range of shapes on printed circuit boards allowed microstrip versions of capacitors, inductors, resonant stubs, splitters, directional couplers, diplexers, filters, and antennas to be made, thus allowing compact microwave circuits to be constructed.[30]

    Transistors that operated at microwave frequencies were developed in the 1970s. The semiconductor gallium arsenide (GaAs) has a much higher electron mobility than silicon,[30] so devices fabricated with this material can operate at four times the frequency of comparable silicon devices. Beginning in the 1970s GaAs was used to make the first microwave transistors,[30] and it has dominated microwave semiconductors ever since. MESFETs (metal–semiconductor field-effect transistors), fast GaAs field-effect transistors using Schottky junctions for the gate, were developed starting in 1968, have reached cutoff frequencies of 100 GHz, and are now the most widely used active microwave devices.[30] Another family of transistors with a higher frequency limit is the HEMT (high-electron-mobility transistor), a field-effect transistor made with two different semiconductors, AlGaAs and GaAs, using heterojunction technology, and the similar HBT (heterojunction bipolar transistor).[30]

    GaAs can be made semi-insulating, allowing it to be used as a substrate on which circuits containing passive components, as well as transistors, can be fabricated by lithography.[30] By 1976 this led to the first integrated circuits (ICs) which functioned at microwave frequencies, called monolithic microwave integrated circuits (MMIC).[30] The word “monolithic” was added to distinguish these from microstrip PCB circuits, which were called “microwave integrated circuits” (MIC). Since then, silicon MMICs have also been developed. Today MMICs have become the workhorses of both analog and digital high-frequency electronics, enabling the production of single-chip microwave receivers, broadband amplifiers, modems, and microprocessors.