The 47 Ronin, Warriors of Ako, committed seppuku on this day, March 20.
Tag: history
The Last Samurai
While I am a big fan of the film The Last Samurai, watching it I noticed a lot of parallels between the story and real history, so I was curious about the link and dug a little further. The Meiji Restoration was a fascinating turning point in Japan’s history, and The Last Samurai draws heavily on those events.
I found the story of Jules Brunet. https://en.wikipedia.org/wiki/Jules_Brunet
Jules Brunet was sent to Japan to train their military in Western tactics before fighting for the samurai against Meiji Imperialists during the Boshin War.
Not many people know the true story of The Last Samurai, the sweeping Tom Cruise epic of 2003. His character, the noble Captain Algren, was actually primarily based on a real person: the French officer Jules Brunet.
Brunet was sent to Japan to train soldiers on how to use modern weapons and tactics. He later chose to stay and fight alongside the Tokugawa samurai in their resistance against Emperor Meiji and his move to modernize Japan.
But how much of this reality is represented in the blockbuster?
The True Story Of The Last Samurai: The Boshin War
Japan of the 19th century was an isolated nation. Contact with foreigners was largely suppressed. But everything changed in 1853, when American naval commander Matthew Perry appeared in the harbor of Edo (modern-day Tokyo) with a fleet of modern ships.
For the first time ever, Japan was forced to open itself up to the outside world. The Japanese then signed a treaty with the U.S. the following year, the Kanagawa Treaty, which allowed American vessels to dock in two Japanese harbors. America also established a consul in Shimoda.
The event was a shock to Japan and split the nation over whether it should modernize with the rest of the world or remain traditional. The Boshin War of 1868-1869, also known as the Japanese Revolution, was the bloody result of this split.
On one side was Japan’s Meiji Emperor, backed by powerful figures who sought to Westernize Japan and revive the emperor’s power. On the opposing side was the Tokugawa Shogunate, a continuation of the military dictatorship of elite samurai that had ruled Japan since 1192.
Although the Tokugawa shogun, or leader, Yoshinobu, agreed to return power to the emperor, the peaceful transition turned violent when the Emperor was convinced to issue a decree that dissolved the Tokugawa house instead.
The Tokugawa shogun protested, which naturally resulted in war. As it happened, 30-year-old French military veteran Jules Brunet was already in Japan when war broke out.
Jules Brunet’s Role In The True Story Of The Last Samurai
Born on January 2, 1838, in Belfort, France, Jules Brunet followed a military career specializing in artillery. He first saw combat during the French intervention in Mexico from 1862 to 1864 where he was awarded the Légion d’honneur — the highest French military honor.
Then, in 1867, Japan’s Tokugawa Shogunate requested help from Napoleon III’s Second French Empire in modernizing their armies. Brunet was sent as the artillery expert alongside a team of other French military advisors.
The group was to train the shogunate’s new troops on how to use modern weapons and tactics. Unfortunately for them, a civil war would break out just a year later between the shogunate and the imperial government.
On January 27, 1868, Brunet and Captain André Cazeneuve — another French military advisor in Japan — accompanied the shogun and his troops on a march to Japan’s capital city of Kyoto.
However, the army was not allowed to pass and troops of the Satsuma and Choshu feudal lords — who were the influence behind the Emperor’s decree — were ordered to fire.
Thus began the first conflict of the Boshin War, known as the Battle of Toba-Fushimi. Although the shogun’s forces had 15,000 men to the Satsuma-Choshu’s 5,000, they had one critical flaw: equipment.
While most of the imperial forces were armed with modern weapons such as rifles, howitzers, and Gatling guns, many of the shogunate’s soldiers were still armed with outdated weapons such as swords and pikes, as was the samurai custom.
The battle lasted for four days, but was a decisive victory for the imperial troops, leading many Japanese feudal lords to switch sides from the shogun to the emperor. Brunet and the Shogunate’s Admiral Enomoto Takeaki fled north to the capital city of Edo (modern-day Tokyo) on the warship Fujisan.
Living With The Samurai
Around this time, foreign nations — including France — vowed neutrality in the conflict. Meanwhile, the restored Meiji Emperor ordered the French advisor mission to return home, since they had been training the troops of his enemy — the Tokugawa Shogunate.
While most of his peers agreed, Brunet refused. He chose to stay and fight alongside the Tokugawa. The only glimpse into Brunet’s decision comes from a letter he wrote directly to French Emperor Napoleon III. Aware that his actions would be seen as either insane or treasonous, he explained that:
“A revolution is forcing the Military Mission to return to France. Alone I stay, alone I wish to continue, under new conditions: the results obtained by the Mission, together with the Party of the North, which is the party favorable to France in Japan. Soon a reaction will take place, and the Daimyos of the North have offered me to be its soul. I have accepted, because with the help of one thousand Japanese officers and non-commissioned officers, our students, I can direct the 50,000 men of the confederation.”
The Fall Of The Samurai
In Edo, the imperial forces were victorious again, thanks in large part to Tokugawa Shogun Yoshinobu’s decision to submit to the Emperor. He surrendered the city, and only small bands of shogunate forces continued to fight back.
Despite this, the commander of the shogunate’s navy, Enomoto Takeaki, refused to surrender and headed north in hopes of rallying the Aizu clan’s samurai.
They became the core of the so-called Northern Coalition of feudal lords who joined the remaining Tokugawa leaders in their refusal to submit to the Emperor.
The Coalition continued to fight bravely against imperial forces in Northern Japan. Unfortunately, they simply didn’t have enough modern weaponry to stand a chance against the Emperor’s modernized troops. They were defeated by November 1868.
Around this time, Brunet and Enomoto fled north to the island of Hokkaido. Here, the remaining Tokugawa leaders established the Ezo Republic that continued their struggle against the Japanese imperial state.
By this point, it seemed as though Brunet had chosen the losing side, but surrender was not an option.
The last major battle of the Boshin War happened at the Hokkaido port city of Hakodate. In this battle that spanned half a year from December 1868 to June 1869, 7,000 Imperial troops battled against 3,000 Tokugawa rebels.
Jules Brunet and his men did their best, but the odds were not in their favor, largely due to the technological superiority of the imperial forces.
Jules Brunet Escapes Japan
As a high-profile combatant of the losing side, Brunet was now a wanted man in Japan.
Fortunately, the French warship Coëtlogon evacuated him from Hokkaido just in time. He was then ferried to Saigon — at the time controlled by the French — and returned to France.
Although the Japanese government demanded Brunet receive punishment for his support of the shogunate in the war, the French government did not budge because his story won the public’s support.
Instead, he was reinstated to the French Army after six months and fought in the Franco-Prussian War of 1870-1871, during which he was taken prisoner at the Siege of Metz.
Later on, he continued to play a major role in the French military, participating in the suppression of the Paris Commune in 1871.
Meanwhile, his former friend Enomoto Takeaki was pardoned and rose to the rank of vice-admiral in the Imperial Japanese Navy, using his influence to get the Japanese government to not only forgive Brunet but award him a number of medals, including the prestigious Order of the Rising Sun.
Over the next 17 years, Jules Brunet himself was promoted several times. From officer to general, to Chief of Staff, he had a thoroughly successful military career until his death in 1911. But he would be most remembered as one of the key inspirations for the 2003 film The Last Samurai.
In this film, Tom Cruise plays American Army officer Nathan Algren, who arrives in Japan to help train Meiji government troops in modern weaponry but becomes embroiled in a war between the samurai and the Emperor’s modern forces.
There are many parallels between the story of Algren and Brunet.
Both were Western military officers who trained Japanese troops in the use of modern weapons and ended up supporting a rebellious group of samurai who still used mainly traditional weapons and tactics. Both also ended up being on the losing side.
But there are many differences as well. Unlike Brunet, Algren trains the imperial government’s troops and joins the samurai only after becoming their hostage.
Further, in the film, the samurai are sorely overmatched against the Imperials in terms of equipment. In the true story of The Last Samurai, however, the samurai rebels actually had some Western garb and weaponry, thanks to Westerners like Brunet who had been paid to train them.
Meanwhile, the film’s storyline is set in a slightly later period, 1877, after the emperor had been restored following the fall of the shogunate. This era was called the Meiji Restoration, and 1877 was also the year of the last major samurai rebellion against Japan’s imperial government.
This rebellion was organized by the samurai leader Saigo Takamori, who served as the inspiration for The Last Samurai‘s Katsumoto, played by Ken Watanabe. Like Takamori, Katsumoto leads the great and final samurai rebellion, which ends at the Battle of Shiroyama. In the film, Katsumoto falls, and in reality, so did Takamori.
This battle, however, came in 1877, years after Brunet had already left Japan.
More importantly, the film paints the samurai rebels as the righteous and honorable keepers of an ancient tradition, while the Emperor’s supporters are shown as evil capitalists who only care about money.
As we know in reality, the real story of Japan’s struggle between modernity and tradition was far less black and white, with injustices and mistakes on both sides.
The Real Motivations Of The Samurai
According to history professor Cathy Schultz, “Many samurai fought Meiji modernization not for altruistic reasons but because it challenged their status as the privileged warrior caste…The film also misses the historical reality that many Meiji policy advisors were former samurai, who had voluntarily given up their traditional privileges to follow a course they believed would strengthen Japan.”
You can read more here: https://allthatsinteresting.com/last-samurai-true-story-jules-brunet
Getting Inked? The History of Tattoos
In 1961, it officially became illegal to give someone a tattoo in New York City, a ban that remained in place until 1997. But Thom deVita didn’t let this restriction deter him from inking people.
What is the earliest evidence of tattoos?
In terms of tattoos on actual bodies, the earliest known examples were for a long time Egyptian and were present on several female mummies dated to c. 2000 B.C. But following the more recent discovery of the Iceman from the area of the Italian-Austrian border in 1991 and his tattoo patterns, this date has been pushed back a further thousand years when he was carbon-dated at around 5,200 years old.
Can you describe the tattoos on the Iceman and their significance?
Following discussions with my colleague Professor Don Brothwell of the University of York, one of the specialists who examined him, we noted that the distribution of the tattooed dots and small crosses on his lower spine and right knee and ankle joints corresponds to areas of strain-induced degeneration, suggesting that they may have been applied to alleviate joint pain and were therefore essentially therapeutic. This would also explain their somewhat ‘random’ distribution in areas of the body which would not have been that easy to display had they been applied as a form of status marker.
What is the evidence that ancient Egyptians had tattoos?
There’s certainly evidence that women had tattoos on their bodies and limbs from figurines c. 4000-3500 B.C. to occasional female figures represented in tomb scenes c. 1200 B.C. and in figurine form c. 1300 B.C., all with tattoos on their thighs. Also small bronze implements identified as tattooing tools were discovered at the town site of Gurob in northern Egypt and dated to c. 1450 B.C. And then, of course, there are the mummies with tattoos, from the three women already mentioned and dated to c. 2000 B.C. to several later examples of female mummies with these forms of permanent marks found in Greco-Roman burials at Akhmim.
What function did these tattoos serve? Who got them and why?
Because this seemed to be an exclusively female practice in ancient Egypt, mummies found with tattoos were usually dismissed by the (male) excavators who seemed to assume the women were of “dubious status,” described in some cases as “dancing girls.” The female mummies had nevertheless been buried at Deir el-Bahari (opposite modern Luxor) in an area associated with royal and elite burials, and we know that at least one of the women described as “probably a royal concubine” was actually a high-status priestess named Amunet, as revealed by her funerary inscriptions.
And although it has long been assumed that such tattoos were the mark of prostitutes or were meant to protect the women against sexually transmitted diseases, I personally believe that the tattooing of ancient Egyptian women had a therapeutic role and functioned as a permanent form of amulet during the very difficult time of pregnancy and birth. This is supported by the pattern of distribution, largely around the abdomen, on top of the thighs and the breasts, and would also explain the specific types of designs, in particular the net-like distribution of dots applied over the abdomen. During pregnancy, this specific pattern would expand in a protective fashion in the same way bead nets were placed over wrapped mummies to protect them and “keep everything in.” The placing of small figures of the household deity Bes at the tops of their thighs would again suggest the use of tattoos as a means of safeguarding the actual birth, since Bes was the protector of women in labor, and his position at the tops of the thighs a suitable location. This would ultimately explain tattoos as a purely female custom.
Who made the tattoos?
Although we have no explicit written evidence in the case of ancient Egypt, it may well be that the older women of a community would create the tattoos for the younger women, as happened in 19th-century Egypt and happens in some parts of the world today.
What instruments did they use?
It is possible that an implement best described as a sharp point set in a wooden handle, dated to c. 3000 B.C. and discovered by archaeologist W.M.F. Petrie at the site of Abydos, may have been used to create tattoos. Petrie also found the aforementioned set of small bronze instruments, c. 1450 B.C. — resembling wide, flattened needles — at the ancient town site of Gurob. If tied together in a bunch, they would provide repeated patterns of multiple dots.
These instruments are also remarkably similar to much later tattooing implements used in 19th-century Egypt. The English writer William Lane (1801-1876) observed, “the operation is performed with several needles (generally seven) tied together: with these the skin is pricked in a desired pattern: some smoke black (of wood or oil), mixed with milk from the breast of a woman, is then rubbed in…. It is generally performed at the age of about 5 or 6 years, and by gipsy-women.”
What did these tattoos look like?
Most examples on mummies are largely dotted patterns of lines and diamond patterns, while figurines sometimes feature more naturalistic images. The tattoos occasionally found in tomb scenes and on small female figurines which form part of cosmetic items also have small figures of the dwarf god Bes on the thigh area.
What were they made of? How many colors were used?
Usually a dark or black pigment such as soot was introduced into the pricked skin. It seems that brighter colors were largely used in other ancient cultures, such as the Inuit who are believed to have used a yellow color along with the more usual darker pigments.
What has surprised you the most about ancient Egyptian tattooing?
That it appears to have been restricted to women during the purely dynastic period, i.e. pre-332 B.C. Also the way in which some of the designs can be seen to be very well placed, once it is accepted they were used as a means of safeguarding women during pregnancy and birth.
Can you describe the tattoos used in other ancient cultures and how they differ?
Among the numerous ancient cultures who appear to have used tattooing as a permanent form of body adornment, the Nubians to the south of Egypt are known to have used tattoos. The mummified remains of women of the indigenous C-group culture found in cemeteries near Kubban c. 2000-1500 B.C. were found to have blue tattoos, which in at least one case featured the same arrangement of dots across the abdomen noted on the aforementioned female mummies from Deir el-Bahari. The ancient Egyptians also represented the male leaders of their Libyan neighbors c. 1300-1100 B.C. with clear, rather geometrical tattoo marks on their arms and legs and portrayed them in Egyptian tomb, temple and palace scenes.
The Scythian Pazyryk of the Altai Mountain region were another ancient culture which employed tattoos. In 1948, the 2,400-year-old body of a Scythian male was discovered preserved in ice in Siberia, his limbs and torso covered in ornate tattoos of mythical animals. Then, in 1993, a woman of similar date, with tattoos of mythical creatures on her shoulders, wrists and thumb, was found in a tomb in Altai. The practice is also confirmed by the Greek writer Herodotus c. 450 B.C., who stated that amongst the Scythians and Thracians “tattoos were a mark of nobility, and not to have them was testimony of low birth.”
Accounts of the ancient Britons likewise suggest they too were tattooed as a mark of high status, and with “divers shapes of beasts” tattooed on their bodies, the Romans named one northern tribe “Picti,” literally “the painted people.”
Yet amongst the Greeks and Romans, tattoos or “stigmata,” as they were then called, seem to have been used largely as a means to mark someone as “belonging” either to a religious sect or to an owner in the case of slaves, or even as a punitive measure to mark them as criminals. It is therefore quite intriguing that during Ptolemaic times, when a dynasty of Macedonian Greek monarchs ruled Egypt, the pharaoh himself, Ptolemy IV (221-205 B.C.), was said to have been tattooed with ivy leaves to symbolize his devotion to Dionysus, Greek god of wine and the patron deity of the royal house at that time. The fashion was also adopted by Roman soldiers and spread across the Roman Empire until the emergence of Christianity, when tattoos were felt to “disfigure that made in God’s image” and so were banned by the Emperor Constantine (reigned A.D. 306-337).
We have also examined tattoos on mummified remains of some of the ancient pre-Columbian cultures of Peru and Chile, which often replicate the same highly ornate images of stylized animals and a wide variety of symbols found in their textile and pottery designs. One stunning female figurine of the Nazca culture has what appears to be a huge tattoo right around her lower torso, stretching across her abdomen and extending down to her genitalia and, presumably, once again alluding to the regions associated with birth. Then on the mummified remains which have survived, the tattoos were noted on torsos, limbs, hands, the fingers and thumbs, and sometimes facial tattooing was practiced.
Extensive facial and body tattooing was also used among Native Americans, such as the Cree, and the mummified bodies of a group of six Greenland Inuit women c. A.D. 1475 likewise revealed evidence for facial tattooing. Infrared examination revealed that five of the women had been tattooed in a line extending over the eyebrows, along the cheeks and in some cases with a series of lines on the chin. Another tattooed female mummy, dated 1,000 years earlier, was also found on St. Lawrence Island in the Bering Sea, her tattoos of dots, lines and hearts confined to the arms and hands.
Evidence for tattooing is also found amongst some of the ancient mummies found in China’s Taklamakan Desert c. 1200 B.C., although during the later Han Dynasty (202 B.C.-A.D. 220), it seems that only criminals were tattooed.
Japanese men began adorning their bodies with elaborate tattoos in the late third century A.D.
The elaborate tattoos of the Polynesian cultures are thought to have developed over millennia, featuring highly intricate geometric designs which in many cases can cover the whole body. Following James Cook’s British expedition to Tahiti in 1769, the islanders’ term “tatatau” or “tattau,” meaning to hit or strike, gave the West our modern term “tattoo.” The marks then became fashionable among Europeans, particularly men in professions that carried serious risks, such as sailors and coal miners, which presumably explains the almost amulet-like use of anchor or miner’s-lamp tattoos on the men’s forearms.
What about modern tattoos outside of the western world?
Modern Japanese tattoos are real works of art, with many modern practitioners, while the highly skilled tattooists of Samoa continue to create their art as it was carried out in ancient times, prior to the invention of modern tattooing equipment. Various cultures throughout Africa also employ tattoos, including the fine dots on the faces of Berber women in Algeria, the elaborate facial tattoos of Wodaabe men in Niger and the small crosses on the inner forearms which mark Egypt’s Christian Copts.
What do Maori facial designs represent?
In the Maori culture of New Zealand, the head was considered the most important part of the body, with the face embellished by incredibly elaborate tattoos or ‘moko,’ which were regarded as marks of high status. Each tattoo design was unique to that individual and since it conveyed specific information about their status, rank, ancestry and abilities, it has accurately been described as a form of ID card or passport, a kind of aesthetic bar code for the face. After sharp bone chisels were used to cut the designs into the skin, a soot-based pigment would be tapped into the open wounds, which then healed over to seal in the design. With the tattoos of warriors given at various stages in their lives as a kind of rite of passage, the decorations were regarded as enhancing their features and making them more attractive to the opposite sex.
Although Maori women were also tattooed on their faces, the markings tended to be concentrated around the nose and lips. Although Christian missionaries tried to stop the procedure, the women maintained that tattoos around their mouths and chins prevented the skin becoming wrinkled and kept them young; the practice was apparently continued as recently as the 1970s.
Why do you think so many cultures have marked the human body and did their practices influence one another?
In many cases, it seems to have sprung up independently as a permanent way to place protective or therapeutic symbols upon the body, then as a means of marking people out into appropriate social, political or religious groups, or simply as a form of self-expression or fashion statement.
Yet, as in so many other areas of adornment, there were of course cross-cultural influences, such as those which existed between the Egyptians and Nubians, the Thracians and Greeks, and the many cultures encountered by Roman soldiers during the expansion of the Roman Empire in the final centuries B.C. and the first centuries A.D. And, certainly, Polynesian culture is thought to have influenced Maori tattoos.
As reports and images from European explorers’ travels in Polynesia reached Europe, the modern fascination with tattoos began to take hold. Although the ancient peoples of Europe had practiced some forms of tattooing, it had disappeared long before the mid-1700s. Explorers returned home with tattooed Polynesians to exhibit at world fairs, in lecture halls and in dime museums, to demonstrate the height of European civilization compared to the “primitive natives” of Polynesia. But the sailors on their ships also returned home with their own tattoos.
Native practitioners found an eager clientele among sailors and other visitors to Polynesia. Colonial ideology dictated that the tattoos of the Polynesians were a mark of their primitiveness. The mortification of their skin and the ritual of spilling blood ran contrary to the values and beliefs of European missionaries, who largely condemned tattoos. Although many forms of traditional Polynesian tattoo declined sharply after the arrival of Europeans, the art form, unbound from tradition, flourished on the fringes of European society.
In the United States, technological advances in machinery, design and color led to a unique, all-American, mass-produced form of tattoo. Martin Hildebrandt set up a permanent tattoo shop in New York City in 1846 and began a tradition by tattooing sailors and military servicemen from both sides of the Civil War. In England, youthful King Edward VII started a tattoo fad among the aristocracy when he was tattooed before ascending to the throne. Both these trends mirror the cultural beliefs that inspired Polynesian tattoos: to show loyalty and devotion, to commemorate a great feat in battle, or simply to beautify the body with a distinctive work of art.
The World War II era of the 1940s was considered the Golden Age of tattoo due to the patriotic mood and the preponderance of men in uniform. But would-be sailors with tattoos of naked women weren’t allowed into the navy, and during the war tattoo artists clothed many of them with nurses’ dresses, Native-American costumes or the like. By the 1950s, tattooing had an established place in Western culture, but was generally viewed with disdain by the higher reaches of society. Back alley and boardwalk tattoo parlors continued to do brisk business with sailors and soldiers. But they often refused to tattoo women unless they were twenty-one, married and accompanied by their spouse, to spare tattoo artists the wrath of a father, boyfriend or unwitting husband.
Today, tattooing is recognized as a legitimate art form that attracts people of all walks of life and both sexes. Each individual has his or her own reasons for getting a tattoo: to mark themselves as a member of a group, to honor loved ones, or to express an image of themselves to others. With the greater acceptance of tattoos in the West, many tattoo artists in Polynesia are incorporating ancient symbols and patterns into modern designs. Others are using the technical advances in tattooing to make traditional tattooing safer and more accessible to Polynesians who want to identify themselves with their culture’s past.
Humans have marked their bodies with tattoos for thousands of years. These permanent designs—sometimes plain, sometimes elaborate, always personal—have served as amulets, status symbols, declarations of love, signs of religious beliefs, adornments and even forms of punishment. Joann Fletcher, research fellow in the department of archaeology at the University of York in Britain, describes the history of tattoos and their cultural significance to people around the world, from the famous “Iceman,” a 5,200-year-old frozen mummy, to today’s Maori.
Source:
- https://www.smithsonianmag.com/history/tattoos-144038580/
- https://www.pbs.org/skinstories/history/beyond.html
- https://www.smithsonianmag.com/travel/tattoos-were-illegal-new-york-city-exhibition-180962232/
What is a Hamburger?
The hamburger is one of the world’s most popular foods, with nearly 50 billion served up annually in the United States alone. Although the humble beef-patty-on-a-bun is technically not much more than 100 years old, it’s part of a far greater lineage, linking American businessmen, World War II soldiers, German political refugees, medieval traders and Neolithic farmers.
The groundwork for the ground-beef sandwich was laid with the domestication of cattle (in Mesopotamia around 10,000 years ago), and with the growth of Hamburg, Germany, as an independent trading city in the 12th century, where beef delicacies were popular.
c. 1200s – Genghis Khan (1162-1227), crowned the “emperor of all emperors,” and his army of fierce Mongol horsemen, known as the “Golden Horde,” conquered two-thirds of the then known world. The Mongols were a fast-moving, cavalry-based army that rode small, sturdy ponies. They stayed in their saddles for long periods of time, sometimes days, without ever dismounting, and had little opportunity to stop and build a fire for their meal.
The entire village would follow behind the army on great wheeled carts they called “yurts,” leading huge herds of sheep, goats, oxen, and horses. As the army needed food that could be carried on their mounts and eaten easily with one hand while they rode, ground meat was the perfect choice. They would use scrapings of lamb or mutton which were formed into flat patties. They softened the meat by placing them under the saddles of their horses while riding into battle. When it was time to eat, the meat would be eaten raw, having been tenderized by the saddle and the back of the horse.
1238 – When Genghis Khan’s grandson Batu Khan invaded Moscow, the Mongols naturally brought their ground-meat staple with them. The Russians adopted it into their own cuisine under the name “Steak Tartare” (Tartars being their name for the Mongols). Over many years, Russian chefs adapted and refined this dish with chopped onions and raw eggs.
15th Century
Beginning in the fifteenth century, minced beef was a valued delicacy throughout Europe. Hashed beef was made into sausage in several different regions of Europe.
1600s – Ships from the German port of Hamburg began calling on Russian ports. During this period the Russian steak tartare was brought back to Germany and called “tartare steak.”
18th and 19th Centuries
Jump ahead to 1848, when political revolutions shook the 39 states of the German Confederation, spurring an increase in German immigration to the United States. With German people came German food: beer gardens flourished in American cities, while butchers offered a panoply of traditional meat preparations. Because Hamburg was known as an exporter of high-quality beef, restaurants began offering a “Hamburg-style” chopped steak.
Hamburg Steak:
In the late eighteenth century, the largest ports in Europe were in Germany. Sailors who traveled between the ports of Hamburg and New York brought the food, and the term “Hamburg Steak,” into popular usage. To attract German sailors, eating stands along the New York City harbor offered “steak cooked in the Hamburg style.”
Immigrants to the United States from German-speaking countries brought with them some of their favorite foods. One of them was Hamburg Steak. The Germans simply flavored shredded low-grade beef with regional spices, and, both cooked and raw, it became a standard meal among the poorer classes. In the seaport town of Hamburg, it acquired the name Hamburg steak. Today, this hamburger patty is no longer called Hamburg Steak in Germany but rather “Frikadelle,” “Frikandelle” or “Bulette,” originally Italian and French words.
According to Theodora FitzGibbon in her book The Food of the Western World – An Encyclopedia of Food from North America and Europe:
The hamburger originated on the German Hamburg-Amerika line boats, which brought emigrants to America in the 1850s. There was at that time a famous Hamburg beef which was salted and sometimes slightly smoked, and therefore ideal for keeping on a long sea voyage. As it was hard, it was minced and sometimes stretched with soaked breadcrumbs and chopped onion. It was popular with the Jewish emigrants, who continued to make Hamburg steaks, as the patties were then called, with fresh meat when they settled in the U.S.
The cookbooks:
1758 – By the mid-18th century, German immigrants also begin arriving in England. One recipe, titled “Hamburgh Sausage,” appeared in Hannah Glasse’s 1758 English cookbook called The Art of Cookery Made Plain and Easy. It consisted of chopped beef, suet, and spices. The author recommended that this sausage be served with toasted bread. Hannah Glasse’s cookbook was also very popular in Colonial America, although it was not published in the United States until 1805. This American edition also contained the “Hamburgh Sausage” recipe with slight revisions.
1884 – The original Boston Cooking School Cook Book, by Mrs. D.A. Lincoln (Mary Bailey Lincoln), published in 1884, had a recipe for Broiled Meat Cakes and also Hamburgh Steak:
Broiled Meat Cakes – Chop lean, raw beef quite fine. Season with salt, pepper, and a little chopped onion, or onion juice. Make it into small flat cakes, and broil on a well-greased gridiron or on a hot frying pan. Serve very hot with butter or Maître d’Hôtel sauce.
Hamburgh Steak – Pound a slice of round steak enough to break the fibre. Fry two or three onions, minced fine, in butter until slightly browned. Spread the onions over the meat, fold the ends of the meat together, and pound again, to keep the onions in the middle. Broil two or three minutes. Spread with butter, salt, and pepper.
1894 – In the 1894 edition of the book The Epicurean: A Complete Treatise of Analytical & Practical Studies, by Charles Ranhofer (1836-1899), chef at the famous Delmonico’s restaurant in New York, there is a listing for Beef Steak Hamburg Style. The dish is also listed in French as Bifteck Hambourgeoise. What made his version unique was that the recipe called for the ground beef to be mixed with kidney and bone marrow:
One pound of tenderloin beef free of sinews and fat; chop it up on a chopping block with four ounces of beef kidney suet, free of nerves and skin, or else the same quantity of marrow; add one ounce of chopped onions fried in butter without attaining color; season all with salt, pepper and nutmeg, and divide the preparation into balls, each one weighing four ounces; flatten them down, roll them in bread-crumbs and fry them in a sauté pan in butter. When of a fine color on both sides, dish them up, pouring a good thickened gravy . . . over.
1906 – Upton Sinclair (1878-1968), the American novelist, published The Jungle, which told of the horrors of Chicago meat-packing plants. Sinclair was surprised that the public missed the main point of his impressionistic fiction and took it instead as an indictment of the unhygienic conditions of the meat-packing industry. The book caused such distrust in the United States that people avoided chopped meat for several years.
Invention of Meat Choppers:
Referring to ground beef as hamburger dates to the invention of the mechanical meat choppers during the 1800s. It was not until the early nineteenth century that wood, tin, and pewter cylinders with wooden plunger pushers became common. Steve Church of Ridgecrest, California uncovered some long forgotten U. S. patents on Meat Cutters:
In mid-19th-century America, preparations of raw beef that had been chopped, chipped, ground or scraped were a common prescription for digestive issues. After a New York doctor, James H. Salisbury, suggested in 1867 that cooked beef patties might be just as healthy, cooks and physicians alike quickly adopted the “Salisbury Steak.” Around the same time, the first popular meat grinders for home use became widely available (Salisbury endorsed one called the American Chopper), setting the stage for an explosion of readily available ground beef.
The hamburger seems to have made its jump from plate to bun in the last decades of the 19th century, though the site of this transformation is highly contested. Lunch wagons, fair stands and roadside restaurants in Wisconsin, Connecticut, Ohio, New York and Texas have all been put forward as possible sites of the hamburger’s birth. Whatever its genesis, the burger-on-a-bun found its first wide audience at the 1904 St. Louis World’s Fair, which also introduced millions of Americans to new foods ranging from waffle ice cream cones and cotton candy to peanut butter and iced tea.
Two years later, though, disaster struck in the form of Upton Sinclair’s journalistic novel The Jungle, which detailed the unsavory side of the American meatpacking industry. Industrial ground beef was easy to adulterate with fillers, preservatives and meat scraps, and the hamburger became a prime suspect.
The history of the American burger:
The hamburger might have remained on the seamier margins of American cuisine were it not for the vision of Edgar “Billy” Ingram and Walter Anderson, who opened their first White Castle restaurant in Kansas in 1921. Sheathed inside and out in gleaming porcelain and stainless steel, White Castle countered hamburger meat’s low reputation by becoming a bastion of cleanliness, health and hygiene (Ingram even commissioned a medical school study to show the health benefits of hamburgers). Their system, which included on-premises meat grinding, worked well and was the inspiration for other national hamburger chains founded in the boom years after World War II: McDonald’s and In-N-Out Burger (both founded in 1948), Burger King (1954) and Wendy’s (1969).
Only one of the claimants below served their hamburgers on a bun – Oscar Weber Bilby in 1891. The rest served them as sandwiches between two slices of bread.
Most of the following stories on the history of the hamburgers were told after the fact and are based on the recollections of family members. For many people, which story or legend you believe probably depends on where you are from. You be the judge! The claims are as follows:
1885 – Charlie Nagreen of Seymour, Wisconsin – At the age of 15, he set up an ox-drawn food stand at the Outagamie County Fair, selling meatballs. Business wasn’t good, and he quickly realized it was because meatballs were too difficult to eat while strolling around the fair. In a flash of innovation, he flattened the meatballs, placed them between two slices of bread and called his new creation a hamburger. He was known to many as “Hamburger Charlie.” He returned to sell hamburgers at the fair every year until his death in 1951, entertaining people with guitar, mouth organ and his jingle:
Hamburgers, hamburgers, hamburgers hot; onions in the middle, pickle on top. Makes your lips go flippity flop.
The town of Seymour, Wisconsin is so certain about this claim that they even have a Hamburger Hall of Fame that they built as a tribute to Charlie Nagreen and the legacy he left behind. The town claims to be “Home of the Hamburger” and holds an annual Burger Festival on the first Saturday of August each year. Events include a ketchup slide, bun toss, and hamburger-eating contest, as well as the “world’s largest hamburger parade.”
On May 9, 2007, members of the Wisconsin legislature declared Seymour, Wisconsin, as the home of the hamburger:
Whereas, Seymour, Wisconsin, is the right home of the hamburger; and,
Whereas, other accounts of the origination of the hamburger trace back only so far as the 1880s, while Seymour’s claim can be traced to 1885; and,
Whereas, Charles Nagreen, also known as Hamburger Charlie, of Seymour, Wisconsin, began calling ground beef patties in a bun “hamburgers” in 1885; and,
Whereas, Hamburger Charlie first sold his world-famous hamburgers at age 15 at the first Seymour Fair in 1885, and later at the Brown and Outagamie county fairs; and,
Whereas, Hamburger Charlie employed as many as eight people at his famous hamburger tent, selling 150 pounds of hamburgers on some days; and,
Whereas, the hamburger has since become an American classic, enjoyed by families and backyard grills alike; now, therefore, be it
Resolved by the assembly, the senate concurring, That the members of the Wisconsin legislature declare Seymour, Wisconsin, the Original Home of the Hamburger.
1885 – The family of Frank and Charles Menches from Akron, Ohio, claim the brothers invented the hamburger while traveling with a 100-man concession circuit that worked events (fairs, race meetings, and farmers’ picnics) in the Midwest in the early 1880s. During a stop at the Erie County Fair in Hamburg, New York, the brothers ran out of pork for their hot sausage patty sandwiches. Because it was a particularly hot day, the local butcher had stopped slaughtering pigs and suggested that they substitute beef for the pork. The brothers ground up the beef, mixed it with some brown sugar, coffee, and other spices, and served it as a sandwich between two pieces of bread. They called this sandwich the “hamburger” after Hamburg, New York, where the fair was being held. According to family legend, Frank didn’t really know what to call it, so he looked up, saw the banner for the Hamburg fair and said, “This is the hamburger.” Frank’s 1951 obituary in The Los Angeles Times acknowledged him as the “inventor” of the hamburger.
Hamburg held its first Burgerfest in 1985 to mark the 100th anniversary of the birth of the hamburger after organizers discovered a history book detailing the burger’s origins.
In 1991, members of the Menches family stumbled across the original recipe among some old papers their great-grandmother left behind. After selling their burgers at county fairs for a few years, the family opened the Menches Bros. Restaurant in Akron, Ohio. The Menches family is still in the restaurant business and still serving hamburgers in Ohio.
On May 28, 2005, the city of Akron, Ohio hosted the First Annual National Hamburger Festival to celebrate the 120th anniversary of the invention of the hamburger, dedicating the festival to Frank and Charles Menches. That is how sure Akron is of the Menches family’s contested claim that two of its residents invented the hamburger. The Ohio legislature is also considering making the hamburger the state food.
1891 – The family of Oscar Weber Bilby claim the first-known hamburger on a bun was served on Grandpa Oscar’s farm just west of Tulsa, Oklahoma in 1891. The family says that Grandpa Oscar was the first to add the bun, but they concede that hamburger sandwiches made with bread may predate Grandpa Oscar’s famous hamburger.
Michael Wallis, travel writer and reporter for Oklahoma Today magazine, did an extensive search in 1995 for the true origins of the hamburger and determined that Oscar Weber Bilby himself was the creator of the hamburger as we know it. According to Wallis’s 1995 article, Welcome To Hamburger Heaven, in an interview with Harold Bilby:
The story has been passed down through the generations like a family Bible. “Grandpa himself told me that it was in June of 1891 when he took up a chunk of iron and made himself a big ol’ grill,” explains Harold. “Then the next month on the Fourth of July he built a hickory wood fire underneath that grill, and when those coals were glowing hot, he took some ground Angus meat and fired up a big batch of hamburgers. When they were cooked all good and juicy, he put them on my Grandma Fanny’s homemade yeast buns – the best buns in all the world, made from her own secret recipe. He served those burgers on buns to neighbors and friends under a grove of pecan trees . . . They couldn’t get enough, so Grandpa hosted another big feed. He did that every Fourth of July, and sometimes as many as 125 people showed up.”
Simple math supports Harold Bilby’s contention that if his Grandpa served burgers on Grandma Fanny’s buns in 1891, then the Bilbys eclipsed the St. Louis World’s Fair vendors by at least thirteen years. That would make Oklahoma the cradle of the hamburger. “There’s not even the trace of a doubt in my mind,” says Harold. “My grandpa invented the hamburger on a bun right here in what became Oklahoma, and if anybody wants to say different, then let them prove otherwise.”
In 1933, Oscar and his son, Leo, opened the family’s first hamburger stand in Tulsa, Oklahoma, called Weber’s Superior Root Beer Stand. They still use the same grill used in 1891, with one minor variation: the wood stove has been converted to natural gas. In a letter to Linda Stradley dated July 31, 2004, Rick Bilby states the following:
My great-grandfather, Oscar Weber Bilby, invented the hamburger on July 4, 1891. He served ground beef patties that were seared to perfection on an open flame from a hand-made grill. My great-grandmother Fanny made her own home-made yeast hamburger buns to put around the ground beef patties. They served this new sandwich along with their tasty home-made root beer, which was also carbonated with yeast. People would come from all over the county on July 4th each year to consume and enjoy these treats. To this day we still cook our hamburger on grandpa’s grill, which is now fired by natural gas.
On April 13, 1995, Governor Frank Keating of Oklahoma proclaimed that the real birthplace of the hamburger on the bun was Tulsa, where it was created and consumed in 1891. The State of Oklahoma Proclamation states:
Whereas, scurrilous rumors have credited Athens, Texas, as the birthplace of the hamburger, claiming for that region south of the Red River commonly known as Baja Oklahoma a fame and renown which are hardly its due; and
Whereas, the Legislature of Baja Oklahoma has gone so far as to declare April 3, 1995, to be Athens Day at the State Capitol, largely on the strength of this bogus claim, and
Whereas, while the residents, the scenery, the hospitality and the food found in Athens are no doubt superior to those in virtually any other locale, they must be recognized, in the words of Mark Twain, as “the lightning bug is to the lightning” when compared with the Great City of Tulsa in the Great State of Oklahoma; and
Whereas, although someone in Athens, in the 1860’s, may have placed cooked ground beef between two slices of bread, this minor accomplishment can in no way be regarded as the true hamburger, which comes on a bun accompanied by such delights as pickles, onions, lettuce, tomato, cheese and, in some cases, special sauce; and
Whereas, the first true hamburger on a bun, as meticulous research shows, was created and consumed in Tulsa in 1891 and was only copied for resale at the St. Louis World’s Fair a full 13 years after that momentous and history-making occasion:
Now Therefore, I, Frank Keating, Governor of the State of Oklahoma, do hereby proclaim April 12, 1995, as THE REAL BIRTHPLACE OF THE HAMBURGER IN TULSA DAY.
1900 – Louis Lassen of New Haven, Connecticut is also recorded as serving the first “burger” at his New Haven luncheonette, Louis’ Lunch Wagon. Louis ran a small lunch wagon selling steak sandwiches to local factory workers. A frugal businessman, he did not like to waste the excess beef from his daily lunch rush. It is said that he ground up some scraps of beef and served them between two pieces of toasted bread to a customer who was in a hurry and wanted to eat on the run.
Kenneth Lassen, Louis’ grandson, was quoted in the September 25, 1991 Athens Daily Review as saying:
“We have signed, dated and notarized affidavits saying we served the first hamburger sandwiches in 1900. Other people may have been serving the steak but there’s a big difference between a hamburger steak and a hamburger sandwich.”
In the mid-1960s, the New Haven Preservation Trust placed a plaque on the building where Louis’ Lunch is located proclaiming Louis’ Lunch to be the first place the hamburger was sold.
Louis’ Lunch is still selling its hamburgers from a small brick building in New Haven. The sandwich is grilled vertically in antique gas grills and served between pieces of toast rather than on a bun, and the restaurant refuses to provide mustard or ketchup.
The Library of Congress named Louis’ Lunch a “Connecticut Legacy.” The following is taken from the Congressional Record, 27 July 2000, page E1377:
Honoring Louis’ Lunch on Its 105th Anniversary – Representative Rosa L. DeLauro:
. . . it is with great pleasure that I rise today to celebrate the 105th anniversary of a true New Haven landmark: Louis’ Lunch. Recently the Lassen family celebrated this landmark as well as the 100th anniversary of their claim to fame — the invention and commercial serving of one of America’s favorites, the hamburger . . . The Lassens and the community of New Haven shared unparalleled excitement when the Library of Congress named Louis’ Lunch a “Connecticut Legacy” — nothing could be more true.
1901 or 1902 – Bert W. Gray of Clarinda, Iowa, in an article by Paige Carlin for the Omaha World Herald newspaper, takes no credit for having invented the hamburger, but he stakes an uncompromising claim to being the “daddy” of the hamburger industry. He served his hamburger on a bun:
The hamburger business all started about 1901 or 1902 (The Grays aren’t sure which) when Mr. Gray operated a little cafe on the east side of Clarinda’s Courthouse Square.
Mr. Gray recalled: “There was an old German here named Ail Wall (or Wahl, maybe) and he ran a butcher shop. One day he was stuffing bologna with a little hand machine, and he said to me: ‘Bert, why wouldn’t ground meat make a good sandwich?’”
“I said I’d try it, so I took this ground beef and mixed it with an egg batter and fried it. I couldn’t get anybody to eat it. I quit the egg batter and just took the meat with a little flour to hold it together. The new technique paid off.”
“He almost ran the other cafes out of the sandwich business,” Mrs. Gray put in. “He could make hamburgers so nice and soft and juicy – better than I ever could,” she added.
“This old German, Wall, came over here from Hamburg, and that’s what he said to call it,” Mr. Gray explained. “I sold them for a nickel apiece in those days. That was when the meat was 10 or 12 cents a pound,” he added. “I bought $5 or $6 worth of meat at a time and I got three or four dozen pans of buns from the bakery a day.”
One time the Grays heard a conflicting claim by a man (somewhere in the northern part of the state) that he was the hamburger’s inventor. “I didn’t pay any attention to him,” Mr. Gray snorted. “I’ve got plenty of proof mine was the first,” he said.
There is so much more to read at https://whatscookingamerica.net/history/hamburgerhistory.htm
Window Tax, aka Daylight Robbery
It was interesting to learn about the etymology of “Daylight Robbery,” and it prompted me to dig deeper.
William III was short of money, a problem he attempted to rectify by introducing the much-despised Window Tax. As the name suggests, this was a tax levied on the windows or window-like openings of a property. The details were much amended over time, but the tax was originally levied on all dwellings except cottages. The upper classes, having the largest houses, paid the most. Some wealthy individuals used their ability to pay as a mark of status and demonstrated their wealth by ostentatiously building homes with many windows.
What the Cavendish family, who owned Hardwick Hall (built 1590s), thought about it isn’t recorded. On the one hand, they had cause for complaint – the property was famous for its many windows and light and airy interiors, as celebrated in the rhyme: “Hardwick Hall, more glass than wall”. On the other hand, they were extremely rich and well able to pay.
Taxes are rarely popular, but the Window Tax, which was considered to tax the very stuff of life, that is, light and air, was singled out for particular loathing. People went to great pains to avoid paying it and many windows were bricked up for that reason. Many examples of buildings with brick window panels, sometimes with painted-on trompe l’oeil windows, still survive.
The sight of such windows is so much a part of the English architectural folk memory that the example pictured, a recently built property in Poundbury, Dorset, appears to have been built with fake bricked-up windows, even though the tax itself has long since been abolished.
So, that’s the case for the prosecution: the English were robbed of their daylight by the Window Tax. That’s daylight robbery in anyone’s book, so do we need to look any further for the origin of the phrase? Well, yes we do.
Let’s move to the 20th century for the case for the defence. The phrase isn’t known in print until 1916 in Hobson’s Choice, a comic play by Harold Brighouse. Even there the context doesn’t explicitly link it to unfair overcharging or the like. We have to wait until 1949 for a citation that is clearly related to a purchase, in Daniel Marcus Davin’s Roads from Home:
“I can never afford it, said his sister. It’s daylight robbery.”
So: daylight robbery, aka the Window Tax.
https://www.phrases.org.uk/meanings/daylight-robbery.html
What Was the Window Tax?
The ‘Window Tax’ was a tax devised by King William III in the 1690s. It was levied on the windows or openings of a building.
The more windows a building had, the more tax its owner paid. It was essentially a progressive tax, whereby the wealthier members of society paid the most, as they tended to have larger houses with more windows.
Indeed, many rich individuals took paying the tax as a badge of honour: the more tax they paid, the more wealth and status they were seen to have. In fact, some houses were built with extra windows for that specific purpose.
How the Swiss Ruled Chocolate
I came across this excerpt from a very interesting article. You can read the full article here:
The Unfinished Dream Behind Amul’s Foray into the Chocolate Industry (thewire.in)
Theobroma cacao, the food of the gods, had been consumed in Latin America in liquid form since Aztec and Mayan times, but it was the making of the milk chocolate bar that brought it within every person’s reach. Spanish colonisers brought chocolate to Europe from Mexico in 1528, and it spread across the continent to reach England by the 1650s. It took another 200 years and an industrial revolution to make the first chocolate bar. J.S. Fry & Sons of Bristol, England made the first solid chocolate bar in 1847, and some 100 miles away in Birmingham, John Cadbury made his eponymous solid chocolate bar by 1849. It took yet another two and a half decades for milk chocolate to be made, a development that made chocolate more palatable and pocket-friendly. It took place in Vevey, Switzerland.
Vevey had become a hub for chocolate factories by the early 1800s. François-Louis Cailler started his factory in 1820, and Kohler started his in 1830. Cailler’s son-in-law Daniel Peter started his factory in 1867, around the same time that his neighbour and friend Henri Nestlé started his infant milk food business. Nestlé had a hand in Peter’s development of milk chocolate in 1875, providing him with condensed milk.
Eventually, all three of them – Cailler, Peter and Kohler – became part of Nestlé in 1929. Lindt initially worked at Kohler’s and then set up his own chocolate factory in 1879, establishing his own brand. One of Lindt’s initial customers, Jean Tobler, opened his factory in 1899, which eventually launched ‘Toblerone’. Thus, by the turn of the 20th century, the Swiss had taken the lead in milk chocolate, helped in no small measure by a burgeoning dairy industry and the Swiss cow.
Cadbury made milk chocolate only in 1897. Its defining milk chocolate – Cadbury Dairy Milk – came out in 1905. Fry merged with Cadbury in 1919. Elsewhere in Europe, Cacao Barry (France) and Callebaut (Belgium) got into the chocolate business in 1911, while Godiva started in Belgium in 1926.
Across the pond, Milton S. Hershey developed his own formula for milk chocolate and made the Hershey bar in 1900. Frank Mars started his milk chocolate bar in the 1920s and his son, Forrest Sr, started M&M in 1940. Meiji in Japan launched its milk chocolate in 1926.
Because non-disclosure agreements did not exist then and milk chocolate was an innovative product, these food-tech startups relied on secrecy and family ties to keep their formulae from being copied. Spying on each other was rampant, as portrayed in Roald Dahl’s book Charlie and the Chocolate Factory. Even today, Ferrero (started in 1946) doesn’t allow cameras or tours in its factory. More than a century later, these brands and companies continue to dominate the $106 billion chocolate market.
Even as the world consumes chocolates worth $106 billion annually, the countries producing cocoa beans get only $8.6 billion – less than 10% of the consumer dollar. In fact, 60% of the world’s cocoa beans are produced in Ghana and Ivory Coast. Farmers growing cocoa beans there struggle for an income of $2 a day and are too poor to eat the chocolates made from their crops. About 80% of the world’s cocoa, from the top five producing countries, flows to Europe and North America. The inequality in trade is complicated by the presence of middlemen known as trader-grinders. Of the 4.6 million tonnes of cocoa beans produced annually, just three companies – Cargill, Olam and Barry Callebaut – control 60% of the flow. Eight companies control more than 90% of it.
Half a century later, there is a trend of Fairtrade chocolates in the Western world. European brands like Divine Chocolate, in which the Ghanaian farmers’ cooperative Kuapa Kokoo has a 20% stake, represent heart-warming initiatives.
Salary = Salt
A while ago I heard that the history of the word “salary” is linked to salt, and so I checked it out.
Well –
Salt being so valuable, soldiers in the Roman army were sometimes paid with it instead of money. Their monthly allowance was called a “salarium” (“sal” being the Latin word for salt). This Latin root can be recognized in the French word “salaire,” and it eventually made it into the English language as the word “salary.”
A Twist in the Tail: The History of the Necktie
The first word that I learnt from this research was “sartorialists,” derived from the adjective “sartorial”: of or relating to clothing, or to style or manner of dress.
Textured, solid, striped, botanical, jacquard, geometric; 52 to 58 inches long; alternately narrowing or widening from 3½ to 5 inches; costing anywhere from three for $10 to $100 or more.
Why has this apparently useless piece of silk, or wool, or rayon, or polyester or even rubber (yes, there are Rubber-Necker Ties, “a recycled fashion statement for the eco-executive”) survived the swings of fashion for more than three centuries? Why is it still fit to be tied?
Fashion observers say the necktie survives because it is the one formal accessory in the male wardrobe that expresses personality, mood or inner character. The tie is that splash of color, that distinctive pattern, that statement of individuality that a man can make in the world of uniform pinstripes and plaids.
The tie has been seen as a form of male chest display, recalling the chest-pounding and puffing of our prehistoric ancestors. Or it can be viewed as the noose around the neck of the conformist white-collar worker, or the symbolic leash held by women, who purchased more than 50 percent of the 105 million ties sold in the United States last year. Although most American men do not wear ties daily, U.S. neckwear sales totaled $1.6 billion last year, with 70 percent made by American companies.
The necktie originated in the 17th century, during the Thirty Years’ War. King Louis XIII of France hired Croatian mercenaries who wore a piece of cloth around their necks as part of their uniform. While these early neckties did serve a function (tying the tops of their jackets), they also had quite a decorative effect, a look that King Louis was quite fond of. In fact, he liked it so much that he made these ties a mandatory accessory for royal gatherings, and, to honor the Croatian soldiers, he gave this clothing piece the name “la cravate,” the French name for the necktie to this day.
International Necktie Day is celebrated on October 18 in Croatia and in various cities around the world, including Dublin, Tübingen, Como, Tokyo, Sydney and other towns.
The Evolution of the Modern Necktie
The early cravats of the 17th century bear little resemblance to today’s necktie, yet the style stayed popular throughout Europe for over 200 years. The tie as we know it today did not emerge until the 1920s, but it has undergone many (often subtle) changes since then.
In the 2nd century A.D., Roman legionnaires probably didn’t think of themselves as reflecting a trend when they tied bands of cloth around their necks. Most likely, they were just looking for protection from the weather.
Some historians have called the legionnaires’ adornments the first neckwear. But others cite the excavation near the Chinese city of Xi’an of 3rd century B.C. terra-cotta statues of warriors who wore neck scarves in the belief that they were protecting the source of their strength, their Adam’s apples.
Most experts, however, date the initial appearance of the modern precursor of the tie to 1636. Croatian mercenaries, hired in Paris by King Louis XIV, wore cloth bands around their necks to ward off natural elements, which in their line of work included sword slashes.
Parisians quickly translated the Croats’ scarf into a new clothing accessory, and, voila!, the cravate was born. The French term cravate is derived from Croates, French for Croatian. Not to be outdone, the English adapted the cravat, dropping the final “e”, and the American colonies soon stepped in line.
Once launched, the cravat and its styles and knots proliferated. Early cravats looked like lace bibs with bows backing them up, some reaching two yards in length.
Among emerging varieties in the late 17th century was the Steinkirk, a corkscrew-like wrap, originating from the Battle of Steinkirk where startled French officers hastily twisted their ties as they fled their tents to turn back the British onslaught.
During the early 18th century and into the 19th century, cravats had major competition: the stock. While a cravat generally was a long piece of cloth that wound around the neck and tied in front, the stock resembled collars worn today for whiplash or other neck injuries.
Made of muslin, sometimes with cardboard stiffeners inside, stocks were fastened in back by a hook or knot. In front, they had what looked like a pretied bowtie or sometimes a wide cravat covering the stock and swathing the neck like a poultice. Stocks forced men to stand upright in a stiff posture.
American revolutionaries George Washington, Thomas Jefferson and the Adamses (John and John Quincy) can be seen in contemporary portraits by Gilbert Stuart and Charles Willson Peale, wearing swath-like cravats, although the American versions were less radical than those of their counterparts in France.
In the mid-1700s, the “solitaire” appeared — attached to the wig in the back, wrapped around the neck and brought to a bow in the front over a cravat.
Some other bizarre dress and tie styles emerged in the mid-18th century. In England, the so-called “Macaronis” were dandies affecting an Italian style, coloring their cheeks with rouge and wearing diamond-studded pumps and cravats with huge bows. The fashion may be alluded to in the lyrics to “Yankee Doodle Dandy.”
History of the Christmas Tree
My son asked me about the Christmas tree and I realized I didn’t know! So here goes, a little bit of history about the Christmas tree, starting with an interesting fact:
In 1659, the General Court of Massachusetts enacted a law making any observance of December 25 (other than a church service) a penal offense; people were fined for hanging decorations. That stern solemnity continued until the 19th century, when the influx of German and Irish immigrants undermined the Puritan legacy.
The history of Christmas trees goes back to the symbolic use of evergreens in ancient Egypt and Rome and continues with the German tradition of candlelit Christmas trees first brought to America in the 1800s. Discover the history of the Christmas tree, from the earliest winter solstice celebrations to Queen Victoria’s decorating habits and the annual lighting of the Rockefeller Center tree in New York City.
How Did Christmas Trees Start?
Long before the advent of Christianity, plants and trees that remained green all year had a special meaning for people in the winter. Just as people today decorate their homes during the festive season with pine, spruce, and fir trees, ancient peoples hung evergreen boughs over their doors and windows. In many countries it was believed that evergreens would keep away witches, ghosts, evil spirits, and illness.
In the Northern hemisphere, the shortest day and longest night of the year falls on December 21 or December 22 and is called the winter solstice. Many ancient people believed that the sun was a god and that winter came every year because the sun god had become sick and weak. They celebrated the solstice because it meant that at last the sun god would begin to get well. Evergreen boughs reminded them of all the green plants that would grow again when the sun god was strong and summer would return.
The ancient Egyptians worshiped a god called Ra, who had the head of a hawk and wore the sun as a blazing disk in his crown. At the solstice, when Ra began to recover from his illness, the Egyptians filled their homes with green palm rushes, which symbolized for them the triumph of life over death.
Early Romans marked the solstice with a feast called Saturnalia in honor of Saturn, the god of agriculture. The Romans knew that the solstice meant that soon, farms and orchards would be green and fruitful. To mark the occasion, they decorated their homes and temples with evergreen boughs.
In Northern Europe the mysterious Druids, the priests of the ancient Celts, also decorated their temples with evergreen boughs as a symbol of everlasting life. The fierce Vikings in Scandinavia thought that evergreens were the special plant of the sun god, Balder.
Christmas Trees From Germany
Germany is credited with starting the Christmas tree tradition as we now know it in the 16th century when devout Christians brought decorated trees into their homes. Some built Christmas pyramids of wood and decorated them with evergreens and candles if wood was scarce. It is a widely held belief that Martin Luther, the 16th-century Protestant reformer, first added lighted candles to a tree. Walking toward his home one winter evening, composing a sermon, he was awed by the brilliance of stars twinkling amidst evergreens. To recapture the scene for his family, he erected a tree in the main room and wired its branches with lighted candles.
Who Brought Christmas Trees to America?
Most 19th-century Americans found Christmas trees an oddity. The first record of one on display dates to the 1830s, among the German settlers of Pennsylvania, although trees had been a tradition in many German homes much earlier. The Pennsylvania German settlements had community trees as early as 1747. But as late as the 1840s, Christmas trees were seen as pagan symbols and not accepted by most Americans.
It is not surprising that, like many other festive Christmas customs, the tree was adopted so late in America. To the New England Puritans, Christmas was sacred. The Pilgrims' second governor, William Bradford, wrote that he tried hard to stamp out "pagan mockery" of the observance, penalizing any frivolity. The influential Oliver Cromwell preached against "the heathen traditions" of Christmas carols, decorated trees, and any joyful expression that desecrated "that sacred event." In 1659, the General Court of Massachusetts enacted a law making any observance of December 25 (other than a church service) a penal offense; people were fined for hanging decorations. That stern solemnity continued until the 19th century, when the influx of German and Irish immigrants undermined the Puritan legacy.
In 1848, the popular royals Queen Victoria and her German husband, Prince Albert, were sketched in the Illustrated London News standing with their children around a Christmas tree. Unlike the previous royal family, Victoria was very popular with her subjects, and what was done at court immediately became fashionable—not only in Britain, but with fashion-conscious East Coast American society. The Christmas tree had arrived.
By the 1890s Christmas ornaments were arriving from Germany and Christmas tree popularity was on the rise around the U.S. It was noted that Europeans used small trees about four feet in height, while Americans liked their Christmas trees to reach from floor to ceiling.
Christmas Trees Around the World
Christmas Trees in Canada
German settlers migrated to Canada from the United States in the 1700s. They brought with them many of the things associated with Christmas we cherish today—Advent calendars, gingerbread houses, cookies—and Christmas trees. When Queen Victoria’s German husband, Prince Albert, put up a Christmas tree at Windsor Castle in 1848, the Christmas tree became a tradition throughout England, the United States, and Canada.
Christmas Trees in Mexico
In most Mexican homes the principal holiday adornment is el Nacimiento (Nativity scene). However, a decorated Christmas tree may be incorporated in the Nacimiento or set up elsewhere in the home. As purchase of a natural pine represents a luxury commodity to most Mexican families, the typical arbolito (little tree) is often an artificial one, a bare branch cut from a copal tree (Bursera microphylla) or some type of shrub collected from the countryside.
Christmas Trees in Great Britain
The Norway spruce is the traditional species used to decorate homes in Britain. The Norway spruce was a native species in the British Isles before the last Ice Age, and was reintroduced to Britain before the 1500s.
Christmas Trees in Greenland
Christmas trees are imported, as no trees live this far north. They are decorated with candles and bright ornaments.
Christmas Trees in Guatemala
The Christmas tree has joined the “Nacimiento” (Nativity scene) as a popular ornament because of the large German population in Guatemala. Gifts are left under the tree on Christmas morning for the children. Parents and adults do not exchange gifts until New Year’s Day.
Christmas Trees in Brazil
Although Christmas falls during the summer in Brazil, sometimes pine trees are decorated with little pieces of cotton that represent falling snow.
Christmas Trees in Ireland
Christmas trees are bought anytime in December and decorated with colored lights, tinsel, and baubles. Some people favor the angel on top of the tree, others the star. The house is decorated with garlands, candles, holly, and ivy. Wreaths and mistletoe are hung on the door.
Christmas Trees in Sweden
Most people buy Christmas trees well before Christmas Eve, but it’s not common to take the tree inside and decorate it until just a few days before. Evergreen trees are decorated with stars, sunbursts, and snowflakes made from straw. Other decorations include colorful wooden animals and straw centerpieces.
Christmas Trees in Norway
Nowadays Norwegians often take a trip to the woods to select a Christmas tree, a trip that their grandfathers probably did not make. The Christmas tree was not introduced into Norway from Germany until the latter half of the 19th century; to the country districts it came even later. When Christmas Eve arrives, there is the decorating of the tree, usually done by the parents behind the closed doors of the living room, while the children wait with excitement outside. A Norwegian ritual known as “circling the Christmas tree” follows, where everyone joins hands to form a ring around the tree and then walk around it singing carols. Afterwards, gifts are distributed.
Christmas Trees in Ukraine
Celebrated on December 25th by Catholics and on January 7th by Orthodox Christians, Christmas is the most popular holiday in Ukraine. During the Christmas season, which also includes New Year's Day, people decorate fir trees and have parties.
Christmas Trees in Spain
A popular Christmas custom in Catalonia is a "lucky strike" game. A tree trunk is filled with goodies, and children hit the trunk trying to knock out the hazelnuts, almonds, toffee, and other treats.
Christmas Trees in Italy
In Italy, the presepio (manger or crib) represents in miniature the Holy Family in the stable and is the center of Christmas for families. Guests kneel before it and musicians sing before it. The presepio figures are usually hand-carved and very detailed in features and dress. The scene is often set out in the shape of a triangle. It provides the base of a pyramid-like structure called the ceppo. This is a wooden frame arranged to make a pyramid several feet high. Several tiers of thin shelves are supported by this frame. It is entirely decorated with colored paper, gilt pine cones, and miniature colored pennants. Small candles are fastened to the tapering sides. A star or small doll is hung at the apex of the triangular sides. The shelves above the manger scene have small gifts of fruit, candy, and presents. The ceppo is in the old Tree of Light tradition which became the Christmas tree in other countries. Some houses even have a ceppo for each child in the family.
Christmas Trees in Germany
Many Christmas traditions practiced around the world today started in Germany.
It has long been thought that Martin Luther began the tradition of bringing a fir tree into the home. According to one legend, late one evening, Martin Luther was walking home through the woods and noticed how beautifully the stars shone through the trees. He wanted to share the beauty with his wife, so he cut down a fir tree and took it home. Once inside, he placed small, lighted candles on the branches and said that it would be a symbol of the beautiful Christmas sky. The Christmas tree was born.
Another legend says that in the early 16th century, people in Germany combined two customs that had been practiced in different countries around the globe. The Paradise tree (a fir tree decorated with apples) represented the Tree of Knowledge in the Garden of Eden. The Christmas Light, a small, pyramid-like frame, usually decorated with glass balls, tinsel and a candle on top, was a symbol of the birth of Christ as the Light of the World. Changing the tree’s apples to tinsel balls and cookies and combining this new tree with the light placed on top, the Germans created the tree that many of us know today.
Modern Tannenbaum (Christmas trees) are traditionally decorated in secret with lights, tinsel and ornaments by parents and then lit and revealed on Christmas Eve with cookies, nuts and gifts under its branches.
Christmas Trees in South Africa
Christmas is a summer holiday in South Africa. Although Christmas trees are not common, windows are often draped with sparkling cotton wool and tinsel.
Christmas Trees in Saudi Arabia
Christian Americans, Europeans, Indians, Filipinos, and others living here have to celebrate Christmas privately in their homes. Christmas lights are generally not tolerated. Most families place their Christmas trees somewhere inconspicuous.
Christmas Trees in Philippines
Fresh pine trees are too expensive for many Filipinos, so handmade trees in an array of colors and sizes are often used. Star lanterns, or parol, appear everywhere in December. They are made from bamboo sticks, covered with brightly colored rice paper or cellophane, and usually feature a tassel on each point. There is usually one in every window, each representing the Star of Bethlehem.
Christmas Trees in China
Of the small percentage of Chinese who do celebrate Christmas, most erect artificial trees decorated with spangles and paper chains, flowers, and lanterns. Christmas trees are called “trees of light.”
Christmas Trees in Japan
For most of the Japanese who celebrate Christmas, it’s purely a secular holiday devoted to the love of their children. Christmas trees are decorated with small toys, dolls, paper ornaments, gold paper fans and lanterns, and wind chimes. Miniature candles are also put among the tree branches. One of the most popular ornaments is the origami swan. Japanese children have exchanged thousands of folded paper “birds of peace” with young people all over the world as a pledge that war must not happen again.
Christmas Tree Trivia
Christmas trees have been sold commercially in the United States since about 1850.
In 1979, the National Christmas Tree was not lighted except for the top ornament. This was done in honor of the American hostages in Iran.
Between 1887 and 1933, a fishing schooner called the Christmas Ship would tie up at the Clark Street bridge and sell spruce trees from Michigan to Chicagoans.
The tallest living Christmas tree is believed to be the 122-foot, 91-year-old Douglas fir in the town of Woodinville, Washington.
The Rockefeller Center Christmas tree tradition began in 1933.
Franklin Pierce, the 14th president, brought the Christmas tree tradition to the White House.
In 1923, President Calvin Coolidge started the National Christmas Tree Lighting Ceremony now held every year on the White House lawn.
Since 1966, the National Christmas Tree Association has given a Christmas tree to the President and first family.
Most Christmas trees are cut weeks before they get to a retail outlet.
In 1912, the first community Christmas tree in the United States was erected in New York City.
Christmas trees generally take six to eight years to mature.
Christmas trees are grown in all 50 states including Hawaii and Alaska.
100,000 people are employed in the Christmas tree industry.
98 percent of all Christmas trees are grown on farms.
More than 1,000,000 acres of land have been planted with Christmas trees.
77 million Christmas trees are planted each year.
On average, over 2,000 Christmas trees are planted per acre.
You should never burn your Christmas tree in the fireplace. It can contribute to creosote buildup.
Other types of trees such as cherry and hawthorns were used as Christmas trees in the past.
Thomas Edison’s assistants came up with the idea of electric lights for Christmas trees.
In 1963, the National Christmas Tree was not lit until December 22nd because of a national 30-day period of mourning following the assassination of President Kennedy.
Teddy Roosevelt banned the Christmas tree from the White House for environmental reasons.
In the first week, a tree in your home will consume as much as a quart of water per day.
Tinsel was once banned in the United States because it contained lead. Now it's made of plastic.
In 1984, the National Christmas Tree was lit on December 13th with temperatures in the 70s, making it one of the warmest tree lightings in history.
34 to 36 million Christmas trees are produced each year and 95 percent are shipped or sold directly from Christmas tree farms.
California, Oregon, Michigan, Washington, Wisconsin, Pennsylvania and North Carolina are the top Christmas tree producing states.
The best-selling trees are Scotch Pine, Douglas Fir, Fraser Fir, Balsam Fir and White Pine.
World's Longest Flight
No, it is not Sydney to London at 18 hours.
It is 64 days, 22 hours, 19 minutes, and 5 seconds.
That is over two months in a Cessna 172, flying twenty-four hours a day, without even landing for fuel. That's exactly what two pilots did back in 1958 over the California and Nevada desert. Bob Timm and John Cook set a world endurance record, remaining airborne for just under 65 days. It was a publicity flight, sponsored by the Hacienda Hotel in Las Vegas.
A stock Cessna 172 was purchased, then modified for the flight. Although the Continental engine was basically untouched, two oil systems, filters, and a 95-gallon fuel tank were installed. The oil could be changed and the plane refueled without shutting down the engine. Except for the pilot seat, the interior was gutted, then redone to include a mattress and a sink.
The right side door was collapsible, providing access to the exterior and enabling the co-pilot to operate a winch for bringing supplies aboard from below. Refueling and resupplying the airplane were the tricky parts. Twice daily, the plane was flown just above a speeding truck, from which a hose was hoisted up to pump 95 gallons of avgas into the belly tank. Food, water, and other supplies were lifted up from the truck as well.
The Cessna 172 was sold to a Canadian pilot, but was eventually brought back to Nevada, where it now hangs from the ceiling at McCarran International Airport.
The entire story of this flight, and the record which stands to this day, is available to read at the Howard W. Cannon Aviation Museum at McCarran Airport in Las Vegas.
Read more about this record flight: https://www.aopa.org/news-and-media/all-news/2008/march/01/endurance-test-circa-1958