A Tapestry of Transformation: From Military Kid to Nasdaq’s Opening Bell

The resounding clang of the Nasdaq opening bell on that crisp January morning in 2024 was more than a simple sound; it was a symphony of personal and professional triumph, a crescendo that reverberated through the corridors of my life. Standing on that iconic podium, a wave of emotions washed over me as I reflected upon the intricate tapestry of experiences that had led me to this pivotal moment. It was a journey marked by resilience, adaptability, and an unyielding pursuit of knowledge and innovation, a testament to the transformative power of unconventional paths and the unwavering spirit of the human will.

The Foundation: A Nomadic Upbringing

My story begins with a childhood defined by constant change and upheaval. As the son of a military officer, I became accustomed to a life of frequent relocations, adapting to new environments and cultures with remarkable speed and agility. In the first four decades of my life, I moved an astounding 24 times, traversing the vast expanse of the United States and even venturing to India. While this nomadic lifestyle presented its share of challenges, it also instilled in me a profound sense of adaptability, resourcefulness, and the ability to forge connections with people from all walks of life.

Each move was a fresh start, an opportunity to reinvent myself and embrace new experiences. I lived in bustling metropolises and quaint rural towns, interacted with individuals from diverse socioeconomic backgrounds, and witnessed firsthand the kaleidoscope of human experiences that shaped our world. This exposure broadened my horizons, challenged my preconceived notions, and cultivated a deep appreciation for the richness and complexity of our global community.

My academic journey mirrored the nomadic nature of my upbringing. In pursuit of a bachelor’s degree in physics, a postgraduate diploma in marketing, and an MBA, I attended nine different educational institutions across two continents. This constant shuffling of schools meant that I never truly established roots in a single place, but it also honed my ability to assimilate into new social circles, navigate different educational systems, and adapt to varying teaching styles. I became a chameleon of sorts, blending seamlessly into new environments and building rapport with classmates and teachers, regardless of their backgrounds or personalities.

The Unconventional Path: A Mosaic of Experiences

My career trajectory was equally unconventional, a mosaic of experiences that spanned diverse industries and roles. I delved into the world of marketing consultancy, honing my skills in strategic planning and brand development. I immersed myself in the realm of product management, gaining insights into the intricate dance between customer needs and technological innovation. I ventured into business development, mastering the art of forging partnerships and cultivating mutually beneficial relationships.

Each experience was a stepping stone, a building block in the foundation of my professional identity. I learned to embrace ambiguity, to navigate complex challenges, and to identify opportunities where others saw obstacles. I developed a deep appreciation for the power of collaboration, recognizing that the greatest achievements are often the result of collective effort.

The Entrepreneurial Spirit: A Burning Desire to Innovate

Throughout my career, I was drawn to the allure of entrepreneurship and innovation, inspired by the stories of visionary leaders who had disrupted their industries and left an indelible mark on the world. I yearned to be part of something bigger than myself, to create something that would have a lasting impact on society. This burning desire led me to the world of Special Purpose Acquisition Companies (SPACs), a relatively new and rapidly evolving financial instrument that offered a unique opportunity to combine my entrepreneurial drive with my experience in capital markets.

SPACs, I discovered, were a powerful tool for democratizing access to investment opportunities, empowering innovative companies, and creating value for all stakeholders. They were a blank canvas upon which I could paint my vision of a future where entrepreneurship and social impact could coexist and thrive.

The Zoomcar Merger: A Defining Moment

The culmination of my SPAC journey came with the Zoomcar merger, a landmark transaction that would forever alter the trajectory of my career. Zoomcar, a trailblazing car-sharing platform that had established a dominant presence in emerging markets, resonated deeply with my values and aspirations. Their innovative business model, which leveraged technology to provide affordable and accessible mobility solutions, coupled with their unwavering commitment to social impact, aligned perfectly with my investment thesis.

The path to a successful merger was fraught with challenges. We encountered volatile market conditions, regulatory complexities in multiple jurisdictions, and the inherent difficulties of integrating two distinct corporate cultures. But through it all, our team remained steadfast in its pursuit of a mutually beneficial outcome. We worked tirelessly, leveraging our collective expertise in finance, law, technology, and operations. We engaged in countless hours of negotiation, carefully crafting deal terms that would protect the interests of all stakeholders.

The ringing of the opening bell at Nasdaq was a moment of profound significance, a testament to the power of perseverance, collaboration, and the unwavering belief in a vision. It was a celebration of the entrepreneurial spirit that drives innovation and progress, a reminder that even the most ambitious dreams can be realized through unwavering dedication and a steadfast commitment to one’s values.

The Ripple Effect: Inspiring a New Generation of Leaders

The success of the Zoomcar merger had a ripple effect throughout the SPAC industry, inspiring renewed optimism and fueling a resurgence of interest in this innovative financing vehicle. It served as a powerful example of how SPACs could be used to support companies with a strong commitment to social impact, creating a win-win scenario for investors and society alike.

How Theatre Helps Develop Essential Life Skills


Theatre is often perceived as merely an art form or a means of entertainment. However, my personal experiences in theatre during high school and college taught me lessons that have shaped me into a better professional and leader. The skills I acquired through theatre have been invaluable in my journey as an operator and Chief Operating Officer (COO).

The Roman Epic Experience
My first encounter with theatre was during high school when I acted in a production of the Roman epic “Caligula.” The experience was unique and eye-opening. The rigorous rehearsal process instilled in me the importance of commitment, discipline, and attention to detail – qualities that would later prove essential in my professional life.

Discovering My True Passion
During college, I continued exploring theatre, but after a couple of acting productions, I realized that my true passion lay backstage. The chaos of a production required a semblance of sanity, and a great stage manager had to have everything planned and accounted for, with contingencies in place for unexpected situations.

The Disaster that Taught Me Resilience
I vividly remember the first show I stage-managed – it was a disaster! The technical rehearsal saw every possible thing go wrong – the backdrop fell, props were missing, and an irate parent even hurled abuses at me because her child “didn’t get to shine” due to my perceived shortcomings. This experience taught me a valuable lesson in resilience and the importance of preparation.

The Production Book: My Secret Weapon
From that low point, I vowed to improve and started meticulously documenting every aspect of the production in my “production book.” This book became synonymous with my identity – people associated me with my book, my utility jacket with a thousand pockets, and my all-black outfit (which eventually became my go-to wardrobe).

The Six Pillars of Stagecraft and Life
Through my theatre experiences, I developed a six-pillar approach that has served me well in both stagecraft and corporate life:

1. Understand: Sitting through rehearsals allowed me to grasp the rhythm and cadence of the show, much like understanding the pace of development in the corporate world.

2. Prepare: Ensuring that all necessary props and tools were in place and positioned correctly, just as having the right resources and tools is crucial in a corporate setting.

3. Plan: Meticulously planning the movement of props, actors, and crew members, akin to strategically allocating resources within an organization for optimal efficiency.

4. Practice: Rehearsing scene changes, movements, and team coordination, just as practice and preparation are essential for successful execution in any professional endeavor.

5. Feedback: Seeking input from actors and crew members to identify areas for improvement, mirroring the importance of open communication and feedback loops in a corporate environment.

6. Execute: When the show begins, all the training and practice come into play, seamlessly executing the plan while being prepared for contingencies – a skill that directly translates to managing operations and addressing issues in the corporate world.

The Lasting Impact of Theatre
My stint in theatre has had a profound and lasting impact on my personal and professional life. The lessons learned backstage – commitment, discipline, attention to detail, resilience, preparation, planning, practice, feedback, and execution – have made me the best COO I could be across various organizations. Theatre has truly been a transformative experience, equipping me with essential life skills that have been invaluable in my journey as a successful operator and leader.

March 20th – 47 Ronin

The 47 Ronin, Warriors of Ako, committed seppuku on this day, March 20.
The story began in 1701, when the Lord of Ako, Asano Naganori, attacked the Chief of Protocol, Lord Kira, within the grounds of Edo Castle, for which he was ordered to commit seppuku. Asano’s lands at Ako (now part of Hyogo Prefecture) were confiscated, and his more than 300 samurai were forced to disband.
On the night of Tuesday, January 30, 1703 (the 14th day of the 12th month by the old Japanese calendar, and the date by which the event is still remembered in Japan), 47 of the former men of Ako stormed the mansion of Kira Yoshinaka, killing the 62-year-old Chief of Protocol.
Having cut off the man’s head, they carried it about 14 km through the streets of Edo to the grave of their former master, Lord Asano, at the Sengaku-ji, a temple in the southern districts of Edo. Then having paid their respects before the grave, they turned themselves in to the authorities.
Although they defied orders prohibiting revenge, they had exemplified the way of the samurai. The Shogunate spent weeks discussing the pros and cons of their actions, before deciding to allow them to commit seppuku rather than be executed.
According to the story, they committed seppuku in the grounds of the Sengaku-ji Temple upon completion of their task.
In fact, the 46 men were separated and billeted out to the homes of various daimyo in Edo at the time while the Bakufu decided upon their fate.
Oishi Kuranosuke, leader of the 47 Ronin, and 16 of the ronin were sent to the mansion of the Lord of Higo (Kumamoto), Hosokawa Tatsunari, located in modern-day Minato-ku, Tokyo. The site of the mass seppuku has been preserved and is marked with a monument.
Oishi’s son was confined to the home of Matsudaira Sadanao, and the two were allowed to meet on the evening prior to their deaths. On February 3, 1703, the Bakufu issued orders that the men, being held in various daimyo homes, were to commit seppuku the following day. Four locations around Edo were decided on and hastily prepared for the following day’s actions. Accepting the sentences as an honor, on Tuesday, March 20, 1703, they performed the seppuku rituals.
Once the men had redeemed themselves through self-destruction, their decapitated bodies were folded into a fetal position, with their heads placed on their knees inside a round wooden tub-like coffin, and carried to the Sengaku-ji where they were buried.
On the gravestones of all the ronin is the kanji for “yaiba,” written 刃, except for one: on the stone of the ashigaru-class foot soldier, Terasaka Kichiemon Nobuyuki, the 刃 kanji does not appear.
Terasaka was an ashigaru serving the Yoshida clan among the ronin. At the time of the attack, Terasaka was sent by Oishi Kuranosuke to inform the remaining Asano clan, including Lord Asano’s widow in modern-day Hyogo Prefecture, that the band of 47 Ronin had avenged the death of their master. Because of his actions, he was pardoned by the shogun. There are claims that he was pardoned because of his young age; however, Oishi Kuranosuke’s son, at 14 or 15, was even younger than Terasaka. It is probable that his rank as an ashigaru simply exempted him from having to commit seppuku.
Terasaka died 43 years after the incident, aged 83 (some sources list 78, but a letter by his grandson survives in Kochi city confirming his grandfather’s age). He was later buried alongside his comrades in arms, although the kanji 刃 does not appear on his gravestone. He is reported to have become a Buddhist priest, serving at the Sengaku-ji and tending to the graves of his comrades following the incident.
Source: Facebook

New rules around Special Purpose Acquisition Companies


The recent SEC rule changes for SPACs have sparked conversations across the investment landscape. While some view them as a blow to these “blank check companies,” I, like many others, remain optimistic about the future of SPACs as a valuable tool for entering the public market.

The new focus on enhanced disclosures and stricter projection guidelines is undoubtedly a positive step. Transparency is crucial for investor trust, and by requiring SPACs to shed more light on their operations, compensation structures, and target companies, the SEC is promoting more informed decision-making.
However, as a SPAC operator myself, I believe the underlying power of this instrument remains strong. Let’s demystify the advantages:

1. A Smoother Path to Public Visibility: Compared to the traditional IPO marathon, SPACs offer a more controlled and predictable journey. Costs are upfront and transparent, eliminating the uncertainty of fundraising rounds. Moreover, having completed much of the due diligence during the SPAC formation stage, companies merging with SPACs can hit the ground running as publicly traded entities. This “IPO-as-a-Service” approach streamlines the process and minimizes disruption to business operations.
2. Leveraging Financial Strength: Strong balance sheets give companies an edge in navigating the capital markets. This pre-IPO stability translates into bolder and more achievable future projections, setting a solid foundation for their public life. With less financial pressure after merging, companies can focus on execution and delivering on their promises, building investor confidence.
3. Focus on Fundamentals over Hype: The increased disclosure requirements mandated by the SEC will help shift the focus from speculative projections to the company’s core strengths and potential. Investors will have access to richer data and deeper insights, enabling them to make informed decisions based on actual realities rather than hypothetical “moonshot” scenarios.

Of course, adapting to the new regulatory landscape will require strategic adjustments. As a recent SPAC operator who prioritized transparency throughout the merger process, I am confident that embracing these changes will ultimately strengthen the SPAC ecosystem and foster healthier, more sustainable ventures.

In conclusion, while the new SEC rules undoubtedly reshape the SPAC landscape, they represent an opportunity for growth and maturity. By embracing transparency, leveraging financial stability, and focusing on fundamentals, SPACs can remain a powerful tool for companies and investors alike. Let’s approach this evolution with optimism and work together to build a better, more informed public market for the future.

________

Disclaimer: The views and opinions expressed in this blog post are solely my own and do not necessarily reflect the official policy or position of any organization I am associated with. These views are personal and are provided for informational purposes only. The information presented here is accurate and true to the best of my knowledge, but there may be omissions, errors, or mistakes. Any reliance you place on the information from this blog/post is strictly at your own risk. I reserve the right to change, update, or remove content at any time. The content provided is my own opinion and not intended to malign any individual, group, organization, or anyone or anything.

The Last Samurai

I am a big fan of the film The Last Samurai. While watching it, I noticed a lot of parallels between the story and real history, so I got curious about the link and dug a little further. The Meiji Restoration was a fascinating turning point in Japan’s history, and the story of the last samurai draws on those parallels.

I found the story of Jules Brunet.  https://en.wikipedia.org/wiki/Jules_Brunet

Jules Brunet was sent to Japan to train its military in Western tactics, and he ended up fighting for the samurai against the Meiji imperial forces during the Boshin War.

Not many people know the true story of The Last Samurai, the sweeping Tom Cruise epic of 2003. His character, the noble Captain Algren, was actually primarily based on a real person: the French officer Jules Brunet.

Brunet was sent to Japan to train soldiers on how to use modern weapons and tactics. He later chose to stay and fight alongside the Tokugawa samurai in their resistance against Emperor Meiji and his move to modernize Japan.

But how much of this reality is represented in the blockbuster?

The True Story Of The Last Samurai: The Boshin War

Japan of the 19th century was an isolated nation. Contact with foreigners was largely suppressed. But everything changed in 1853 when American naval commander Matthew Perry appeared in Tokyo’s harbor with a fleet of modern ships.

Image caption: A painting of samurai rebel troops by none other than Jules Brunet (Wikimedia Commons). Notice how the samurai have both Western and traditional equipment, a point of the true story of The Last Samurai not explored in the movie.

For the first time ever, Japan was forced to open itself up to the outside world. The Japanese then signed a treaty with the U.S. the following year, the Kanagawa Treaty, which allowed American vessels to dock in two Japanese harbors. America also established a consul in Shimoda.

The event was a shock to Japan and split the nation on whether it should modernize with the rest of the world or remain traditional. The Boshin War of 1868-1869, also known as the Japanese Revolution, was the bloody result of this split.

On one side was Japan’s Meiji Emperor, backed by powerful figures who sought to Westernize Japan and revive the emperor’s power. On the opposing side was the Tokugawa Shogunate, a continuation of the military dictatorship of elite samurai that had ruled Japan since 1192.

Although the Tokugawa shogun, or leader, Yoshinobu, agreed to return power to the emperor, the peaceful transition turned violent when the Emperor was convinced to issue a decree that dissolved the Tokugawa house instead.

The Tokugawa shogun protested, which naturally resulted in war. As it happened, 30-year-old French military veteran Jules Brunet was already in Japan when war broke out.

Image caption: Samurai of the Choshu clan during the Boshin War in late 1860s Japan (Wikimedia Commons).

Jules Brunet’s Role In The True Story Of The Last Samurai

Born on January 2, 1838, in Belfort, France, Jules Brunet followed a military career specializing in artillery. He first saw combat during the French intervention in Mexico from 1862 to 1864 where he was awarded the Légion d’honneur — the highest French military honor.

Image caption: Jules Brunet in full military dress in 1868 (Wikimedia Commons).

Then, in 1867, Japan’s Tokugawa Shogunate requested help from Napoleon III’s Second French Empire in modernizing their armies. Brunet was sent as the artillery expert alongside a team of other French military advisors.

The group was to train the shogunate’s new troops on how to use modern weapons and tactics. Unfortunately for them, a civil war would break out just a year later between the shogunate and the imperial government.

On January 27, 1868, Brunet and Captain André Cazeneuve — another French military advisor in Japan — accompanied the shogun and his troops on a march to Japan’s capital city of Kyoto.

The shogun’s army was to deliver a stern letter to the Emperor to reverse his decision to strip the Tokugawa shogunate, or the longstanding elite, of their titles and lands.

However, the army was not allowed to pass and troops of the Satsuma and Choshu feudal lords — who were the influence behind the Emperor’s decree — were ordered to fire.

Thus began the first conflict of the Boshin War known as The Battle of Toba-Fushimi. Although the shogun’s forces had 15,000 men to the Satsuma-Choshu’s 5,000, they had one critical flaw: equipment.

While most of the imperial forces were armed with modern weapons such as rifles, howitzers, and Gatling guns, many of the shogunate’s soldiers were still armed with outdated weapons such as swords and pikes, as was the samurai custom.

The battle lasted for four days, but was a decisive victory for the imperial troops, leading many Japanese feudal lords to switch sides from the shogun to the emperor. Brunet and the Shogunate’s Admiral Enomoto Takeaki fled north to the capital city of Edo (modern-day Tokyo) on the warship Fujisan.

Living With The Samurai

Around this time, foreign nations — including France — vowed neutrality in the conflict. Meanwhile, the restored Meiji Emperor ordered the French advisor mission to return home, since they had been training the troops of his enemy — the Tokugawa Shogunate.

Image caption: The full samurai battle regalia a Japanese warrior would wear to war, 1860 (Wikimedia Commons).

While most of his peers agreed, Brunet refused. He chose to stay and fight alongside the Tokugawa. The only glimpse into Brunet’s decision comes from a letter he wrote directly to French Emperor Napoleon III. Aware that his actions would be seen as either insane or treasonous, he explained that:

“A revolution is forcing the Military Mission to return to France. Alone I stay, alone I wish to continue, under new conditions: the results obtained by the Mission, together with the Party of the North, which is the party favorable to France in Japan. Soon a reaction will take place, and the Daimyos of the North have offered me to be its soul. I have accepted, because with the help of one thousand Japanese officers and non-commissioned officers, our students, I can direct the 50,000 men of the confederation.”

The Fall Of The Samurai

In Edo, the imperial forces were victorious again, largely thanks to Tokugawa Shogun Yoshinobu’s decision to submit to the Emperor. He surrendered the city, and only small bands of shogunate forces continued to fight back.

Image caption: The port of Hakodate, ca. 1930 (Wikimedia Commons). The Battle of Hakodate saw 7,000 Imperial troops fight 3,000 shogun warriors in 1869.

Despite this, the commander of the shogunate’s navy, Enomoto Takeaki, refused to surrender and headed north in hopes of rallying the Aizu clan’s samurai.

They became the core of the so-called Northern Coalition of feudal lords who joined the remaining Tokugawa leaders in their refusal to submit to the Emperor.

The Coalition continued to fight bravely against imperial forces in Northern Japan. Unfortunately, they simply didn’t have enough modern weaponry to stand a chance against the Emperor’s modernized troops. They were defeated by November 1868.

Around this time, Brunet and Enomoto fled north to the island of Hokkaido. Here, the remaining Tokugawa leaders established the Ezo Republic that continued their struggle against the Japanese imperial state.

By this point, it seemed as though Brunet had chosen the losing side, but surrender was not an option.

The last major battle of the Boshin War happened at the Hokkaido port city of Hakodate. In this battle that spanned half a year from December 1868 to June 1869, 7,000 Imperial troops battled against 3,000 Tokugawa rebels.

Image caption: French military advisors and their Japanese allies in Hokkaido (Wikimedia Commons). Back: Cazeneuve, Marlin, Fukushima Tokinosuke, Fortant. Front: Hosoya Yasutaro, Jules Brunet, Matsudaira Taro (vice-president of the Ezo Republic), and Tajima Kintaro.

Jules Brunet and his men did their best, but the odds were not in their favor, largely due to the technological superiority of the imperial forces.

Jules Brunet Escapes Japan

As a high-profile combatant of the losing side, Brunet was now a wanted man in Japan.

Fortunately, the French warship Coëtlogon evacuated him from Hokkaido just in time. He was then ferried to Saigon — at the time controlled by the French — and returned to France.

Although the Japanese government demanded Brunet receive punishment for his support of the shogunate in the war, the French government did not budge because his story won the public’s support.

Instead, he was reinstated to the French Army after six months and fought in the Franco-Prussian War of 1870-1871, during which he was taken prisoner at the Siege of Metz.

Later on, he continued to play a major role in the French military, participating in the suppression of the Paris Commune in 1871.

Image caption: Jules Brunet had a long, successful military career after his time in Japan. He’s seen here (hat in hand) as Chief of Staff, Oct. 1, 1898 (Wikimedia Commons).

Meanwhile, his former friend Enomoto Takeaki was pardoned and rose to the rank of vice-admiral in the Imperial Japanese Navy, using his influence to get the Japanese government to not only forgive Brunet but award him a number of medals, including the prestigious Order of the Rising Sun.

Over the next 17 years, Jules Brunet himself was promoted several times. From officer to general, to Chief of Staff, he had a thoroughly successful military career until his death in 1911.

Brunet’s daring, adventurous actions in Japan were one of the main inspirations for the 2003 film The Last Samurai.

In this film, Tom Cruise plays American Army officer Nathan Algren, who arrives in Japan to help train Meiji government troops in modern weaponry but becomes embroiled in a war between the samurai and the Emperor’s modern forces.

There are many parallels between the story of Algren and Brunet.

Both were Western military officers who trained Japanese troops in the use of modern weapons and ended up supporting a rebellious group of samurai who still used mainly traditional weapons and tactics. Both also ended up being on the losing side.

But there are many differences as well. Unlike Brunet, Algren trains the imperial government’s troops and joins the samurai only after becoming their hostage.

Further, in the film, the samurai are sorely overmatched against the Imperials in terms of equipment. In the true story of The Last Samurai, however, the samurai rebels actually had some Western garb and weaponry, thanks to Westerners like Brunet who had been paid to train them.

Meanwhile, the film’s storyline is set in a slightly later period, 1877, after the emperor had been restored following the fall of the shogunate. This period was called the Meiji Restoration, and that same year saw the last major samurai rebellion against Japan’s imperial government.

Image caption: In the true story of The Last Samurai, this final battle, which is depicted in the film and shows Katsumoto/Takamori’s death, did actually happen. But it happened years after Brunet left Japan (Wikimedia Commons).

This rebellion was organized by the samurai leader Saigo Takamori, who served as the inspiration for The Last Samurai‘s Katsumoto, played by Ken Watanabe. In the true story of The Last Samurai, the rebellion Takamori led ended with the final battle of Shiroyama. In the film, Watanabe’s character Katsumoto falls, and in reality, so did Takamori.

This battle, however, came in 1877, years after Brunet had already left Japan.

More importantly, the film paints the samurai rebels as the righteous and honorable keepers of an ancient tradition, while the Emperor’s supporters are shown as evil capitalists who only care about money.

In reality, Japan’s struggle between modernity and tradition was far less black and white, with injustices and mistakes on both sides.

The Real Motivations Of The Samurai

According to history professor Cathy Schultz, “Many samurai fought Meiji modernization not for altruistic reasons but because it challenged their status as the privileged warrior caste…The film also misses the historical reality that many Meiji policy advisors were former samurai, who had voluntarily given up their traditional privileges to follow a course they believed would strengthen Japan.”

You can read more here: https://allthatsinteresting.com/last-samurai-true-story-jules-brunet

Getting Inked? The history of Tattoos

In 1961, it officially became illegal to give someone a tattoo in New York City. But Thom deVita didn’t let this new restriction deter him from inking people. The ban remained in place until 1997.

What is the earliest evidence of tattoos?

In terms of tattoos on actual bodies, the earliest known examples were for a long time Egyptian and were present on several female mummies dated to c. 2000 B.C. But following the more recent discovery of the Iceman from the area of the Italian-Austrian border in 1991 and his tattoo patterns, this date has been pushed back a further thousand years when he was carbon-dated at around 5,200 years old.

Can you describe the tattoos on the Iceman and their significance?

Following discussions with my colleague Professor Don Brothwell of the University of York, one of the specialists who examined him, the distribution of the tattooed dots and small crosses on his lower spine and right knee and ankle joints corresponds to areas of strain-induced degeneration, with the suggestion that they may have been applied to alleviate joint pain and were therefore essentially therapeutic. This would also explain their somewhat ‘random’ distribution in areas of the body which would not have been that easy to display had they been applied as a form of status marker.

What is the evidence that ancient Egyptians had tattoos?

There’s certainly evidence that women had tattoos on their bodies and limbs from figurines c. 4000-3500 B.C. to occasional female figures represented in tomb scenes c. 1200 B.C. and in figurine form c. 1300 B.C., all with tattoos on their thighs. Also small bronze implements identified as tattooing tools were discovered at the town site of Gurob in northern Egypt and dated to c. 1450 B.C. And then, of course, there are the mummies with tattoos, from the three women already mentioned and dated to c. 2000 B.C. to several later examples of female mummies with these forms of permanent marks found in Greco-Roman burials at Akhmim.

What function did these tattoos serve? Who got them and why?

Because this seemed to be an exclusively female practice in ancient Egypt, mummies found with tattoos were usually dismissed by the (male) excavators who seemed to assume the women were of “dubious status,” described in some cases as “dancing girls.” The female mummies had nevertheless been buried at Deir el-Bahari (opposite modern Luxor) in an area associated with royal and elite burials, and we know that at least one of the women described as “probably a royal concubine” was actually a high-status priestess named Amunet, as revealed by her funerary inscriptions.

And although it has long been assumed that such tattoos were the mark of prostitutes or were meant to protect the women against sexually transmitted diseases, I personally believe that the tattooing of ancient Egyptian women had a therapeutic role and functioned as a permanent form of amulet during the very difficult time of pregnancy and birth. This is supported by the pattern of distribution, largely around the abdomen, on top of the thighs and the breasts, and would also explain the specific types of designs, in particular the net-like distribution of dots applied over the abdomen. During pregnancy, this specific pattern would expand in a protective fashion in the same way bead nets were placed over wrapped mummies to protect them and “keep everything in.” The placing of small figures of the household deity Bes at the tops of their thighs would again suggest the use of tattoos as a means of safeguarding the actual birth, since Bes was the protector of women in labor, and his position at the tops of the thighs a suitable location. This would ultimately explain tattoos as a purely female custom.

Who made the tattoos?

Although we have no explicit written evidence in the case of ancient Egypt, it may well be that the older women of a community would create the tattoos for the younger women, as happened in 19th-century Egypt and happens in some parts of the world today.

What instruments did they use?

It is possible that an implement best described as a sharp point set in a wooden handle, dated to c. 3000 B.C. and discovered by archaeologist W.M.F. Petrie at the site of Abydos, may have been used to create tattoos. Petrie also found the aforementioned set of small bronze instruments c. 1450 B.C.—resembling wide, flattened needles—at the ancient town site of Gurob. If tied together in a bunch, they would provide repeated patterns of multiple dots.

These instruments are also remarkably similar to much later tattooing implements used in 19th-century Egypt. The English writer William Lane (1801-1876) observed, “the operation is performed with several needles (generally seven) tied together: with these the skin is pricked in a desired pattern: some smoke black (of wood or oil), mixed with milk from the breast of a woman, is then rubbed in…. It is generally performed at the age of about 5 or 6 years, and by gipsy-women.”

What did these tattoos look like?

Most examples on mummies are largely dotted patterns of lines and diamond patterns, while figurines sometimes feature more naturalistic images. The tattoos occasionally found in tomb scenes and on small female figurines which form part of cosmetic items also have small figures of the dwarf god Bes on the thigh area.

What were they made of? How many colors were used?

Usually a dark or black pigment such as soot was introduced into the pricked skin. It seems that brighter colors were largely used in other ancient cultures, such as the Inuit who are believed to have used a yellow color along with the more usual darker pigments.

What has surprised you the most about ancient Egyptian tattooing?

That it appears to have been restricted to women during the purely dynastic period, i.e. pre-332 B.C. Also the way in which some of the designs can be seen to be very well placed, once it is accepted they were used as a means of safeguarding women during pregnancy and birth.

Can you describe the tattoos used in other ancient cultures and how they differ?

Among the numerous ancient cultures who appear to have used tattooing as a permanent form of body adornment, the Nubians to the south of Egypt are known to have used tattoos. The mummified remains of women of the indigenous C-group culture found in cemeteries near Kubban c. 2000-1500 B.C. were found to have blue tattoos, which in at least one case featured the same arrangement of dots across the abdomen noted on the aforementioned female mummies from Deir el-Bahari. The ancient Egyptians also represented the male leaders of their Libyan neighbors c. 1300-1100 B.C. with clear, rather geometrical tattoo marks on their arms and legs and portrayed them in Egyptian tomb, temple and palace scenes.

The Scythian Pazyryk of the Altai Mountain region were another ancient culture which employed tattoos. In 1948, the 2,400 year old body of a Scythian male was discovered preserved in ice in Siberia, his limbs and torso covered in ornate tattoos of mythical animals. Then, in 1993, a woman with tattoos, again of mythical creatures on her shoulders, wrists and thumb and of similar date, was found in a tomb in Altai. The practice is also confirmed by the Greek writer Herodotus c. 450 B.C., who stated that amongst the Scythians and Thracians “tattoos were a mark of nobility, and not to have them was testimony of low birth.”

Accounts of the ancient Britons likewise suggest they too were tattooed as a mark of high status, and with “divers shapes of beasts” tattooed on their bodies, the Romans named one northern tribe “Picti,” literally “the painted people.”

Yet amongst the Greeks and Romans, tattoos, or “stigmata” as they were then called, seem to have been used largely as a means to mark someone as “belonging,” either to a religious sect or to an owner in the case of slaves, or even as a punitive measure to mark them as criminals. It is therefore quite intriguing that during Ptolemaic times, when a dynasty of Macedonian Greek monarchs ruled Egypt, the pharaoh himself, Ptolemy IV (221-205 B.C.), was said to have been tattooed with ivy leaves to symbolize his devotion to Dionysus, Greek god of wine and the patron deity of the royal house at that time. The fashion was also adopted by Roman soldiers and spread across the Roman Empire until the emergence of Christianity, when tattoos were felt to “disfigure that made in God’s image” and so were banned by the Emperor Constantine (A.D. 306-337).

We have also examined tattoos on mummified remains of some of the ancient pre-Columbian cultures of Peru and Chile, which often replicate the same highly ornate images of stylized animals and a wide variety of symbols found in their textile and pottery designs. One stunning female figurine of the Nazca culture has what appears to be a huge tattoo right around her lower torso, stretching across her abdomen and extending down to her genitalia and, presumably, once again alluding to the regions associated with birth. Then on the mummified remains which have survived, the tattoos were noted on torsos, limbs, hands, the fingers and thumbs, and sometimes facial tattooing was practiced.

With extensive facial and body tattooing used among Native Americans, such as the Cree, the mummified bodies of a group of six Greenland Inuit women c. A.D. 1475 also revealed evidence for facial tattooing. Infrared examination revealed that five of the women had been tattooed in a line extending over the eyebrows, along the cheeks and in some cases with a series of lines on the chin. Another tattooed female mummy, dated 1,000 years earlier, was also found on St. Lawrence Island in the Bering Sea, her tattoos of dots, lines and hearts confined to the arms and hands.

Evidence for tattooing is also found amongst some of the ancient mummies found in China’s Taklamakan Desert c. 1200 B.C., although during the later Han Dynasty (202 B.C.-A.D. 220), it seems that only criminals were tattooed.

Japanese men began adorning their bodies with elaborate tattoos in the late A.D. 3rd century.

The elaborate tattoos of the Polynesian cultures are thought to have developed over millennia, featuring highly elaborate geometric designs, which in many cases can cover the whole body. Following James Cook’s British expedition to Tahiti in 1769, the islanders’ term “tatatau” or “tattau,” meaning to hit or strike, gave the West our modern term “tattoo.” The marks then became fashionable among Europeans, particularly men such as sailors and coal-miners, both professions that carried serious risks, which presumably explains the almost amulet-like use of anchor or miner’s-lamp tattoos on the men’s forearms.

What about modern tattoos outside of the western world?

Modern Japanese tattoos are real works of art, with many modern practitioners, while the highly skilled tattooists of Samoa continue to create their art as it was carried out in ancient times, prior to the invention of modern tattooing equipment. Various cultures throughout Africa also employ tattoos, including the fine dots on the faces of Berber women in Algeria, the elaborate facial tattoos of Wodaabe men in Niger and the small crosses on the inner forearms which mark Egypt’s Christian Copts.

What do Maori facial designs represent?

In the Maori culture of New Zealand, the head was considered the most important part of the body, with the face embellished by incredibly elaborate tattoos or ‘moko,’ which were regarded as marks of high status. Each tattoo design was unique to that individual and since it conveyed specific information about their status, rank, ancestry and abilities, it has accurately been described as a form of ID card or passport, a kind of aesthetic bar code for the face. After sharp bone chisels were used to cut the designs into the skin, a soot-based pigment would be tapped into the open wounds, which then healed over to seal in the design. With the tattoos of warriors given at various stages in their lives as a kind of rite of passage, the decorations were regarded as enhancing their features and making them more attractive to the opposite sex.

Although Maori women were also tattooed on their faces, the markings tended to be concentrated around the nose and lips. Although Christian missionaries tried to stop the procedure, the women maintained that tattoos around their mouths and chins prevented the skin becoming wrinkled and kept them young; the practice was apparently continued as recently as the 1970s.

Why do you think so many cultures have marked the human body and did their practices influence one another?

In many cases, it seems to have sprung up independently as a permanent way to place protective or therapeutic symbols upon the body, then as a means of marking people out into appropriate social, political or religious groups, or simply as a form of self-expression or fashion statement.

Yet, as in so many other areas of adornment, there were of course cross-cultural influences, such as those which existed between the Egyptians and Nubians, the Thracians and Greeks and the many cultures encountered by Roman soldiers during the expansion of the Roman Empire in the final centuries B.C. and the first centuries A.D. And, certainly, Polynesian culture is thought to have influenced Maori tattoos.

As reports and images from European explorers’ travels in Polynesia reached Europe, the modern fascination with tattoos began to take hold. Although the ancient peoples of Europe had practiced some forms of tattooing, it had disappeared long before the mid-1700s. Explorers returned home with tattooed Polynesians to exhibit at world fairs, in lecture halls and in dime museums, to demonstrate the height of European civilization compared to the “primitive natives” of Polynesia. But the sailors on their ships also returned home with their own tattoos.

Native practitioners found an eager clientele among sailors and other visitors to Polynesia. Colonial ideology dictated that the tattoos of the Polynesians were a mark of their primitiveness. The mortification of their skin and the ritual of spilling blood ran contrary to the values and beliefs of European missionaries, who largely condemned tattoos. Although many forms of traditional Polynesian tattoo declined sharply after the arrival of Europeans, the art form, unbound from tradition, flourished on the fringes of European society.

In the United States, technological advances in machinery, design and color led to a unique, all-American, mass-produced form of tattoo. Martin Hildebrandt set up a permanent tattoo shop in New York City in 1846 and began a tradition by tattooing sailors and military servicemen from both sides of the Civil War. In England, youthful King Edward VII started a tattoo fad among the aristocracy when he was tattooed before ascending to the throne. Both these trends mirror the cultural beliefs that inspired Polynesian tattoos: to show loyalty and devotion, to commemorate a great feat in battle, or simply to beautify the body with a distinctive work of art.

The World War II era of the 1940s was considered the Golden Age of tattoo due to the patriotic mood and the preponderance of men in uniform. But would-be sailors with tattoos of naked women weren’t allowed into the navy, and tattoo artists clothed many of them with nurses’ dresses, Native-American costumes or the like during the war. By the 1950s, tattooing had an established place in Western culture but was generally viewed with disdain by the higher reaches of society. Back alley and boardwalk tattoo parlors continued to do brisk business with sailors and soldiers. But they often refused to tattoo women unless they were twenty-one, married and accompanied by their spouse, to spare tattoo artists the wrath of a father, boyfriend or unwitting husband.


Today, tattooing is recognized as a legitimate art form that attracts people of all walks of life and both sexes. Each individual has his or her own reasons for getting a tattoo: to mark themselves as a member of a group, to honor loved ones, or to express an image of themselves to others. With the greater acceptance of tattoos in the West, many tattoo artists in Polynesia are incorporating ancient symbols and patterns into modern designs. Others are using the technical advances in tattooing to make traditional tattooing safer and more accessible to Polynesians who want to identify themselves with their culture’s past.

Humans have marked their bodies with tattoos for thousands of years. These permanent designs—sometimes plain, sometimes elaborate, always personal—have served as amulets, status symbols, declarations of love, signs of religious beliefs, adornments and even forms of punishment. Joann Fletcher, research fellow in the department of archaeology at the University of York in Britain, describes the history of tattoos and their cultural significance to people around the world, from the famous “Iceman,” a 5,200-year-old frozen mummy, to today’s Maori.

Source:

  1. https://www.smithsonianmag.com/history/tattoos-144038580/
  2. https://www.pbs.org/skinstories/history/beyond.html
  3. https://www.smithsonianmag.com/travel/tattoos-were-illegal-new-york-city-exhibition-180962232/

 

What is a Hamburger?

The hamburger is one of the world’s most popular foods, with nearly 50 billion served up annually in the United States alone. Although the humble beef-patty-on-a-bun is technically not much more than 100 years old, it’s part of a far greater lineage, linking American businessmen, World War II soldiers, German political refugees, medieval traders and Neolithic farmers.

The groundwork for the ground-beef sandwich was laid with the domestication of cattle (in Mesopotamia around 10,000 years ago), and with the growth of Hamburg, Germany, as an independent trading city in the 12th century, where beef delicacies were popular.

Late 12th – early 13th centuries – Genghis Khan (1162-1227), crowned the “emperor of all emperors,” and his army of fierce Mongol horsemen, known as the “Golden Horde,” conquered two-thirds of the then known world. The Mongols were a fast-moving, cavalry-based army that rode small, sturdy ponies. They stayed in their saddles for long periods of time, sometimes days, without ever dismounting. They had little opportunity to stop and build a fire for their meal.

The entire village would follow behind the army on great wheeled carts they called “yurts,” leading huge herds of sheep, goats, oxen, and horses. As the army needed food that could be carried on their mounts and eaten easily with one hand while they rode, ground meat was the perfect choice. They would use scrapings of lamb or mutton which were formed into flat patties. They softened the meat by placing them under the saddles of their horses while riding into battle. When it was time to eat, the meat would be eaten raw, having been tenderized by the saddle and the back of the horse.

1238 – When Genghis Khan’s grandson, Khubilai Khan (1215-1294), invaded Moscow, the Mongols naturally brought their unique ground-meat dish with them. The Russians adopted it into their own cuisine under the name “Steak Tartare” (Tartars being their name for the Mongols). Over many years, Russian chefs adapted and developed this dish, refining it with chopped onions and raw eggs.

15th Century
Beginning in the fifteenth century, minced beef was a valued delicacy throughout Europe. Hashed beef was made into sausage in several different regions of Europe.

1600s – Ships from the German port of Hamburg began calling on Russian ports. During this period the Russian steak tartare was brought back to Germany and called “tartare steak.”

18th and 19th Centuries

Jump ahead to 1848, when political revolutions shook the 39 states of the German Confederation, spurring an increase in German immigration to the United States. With German people came German food: beer gardens flourished in American cities, while butchers offered a panoply of traditional meat preparations. Because Hamburg was known as an exporter of high-quality beef, restaurants began offering a “Hamburg-style” chopped steak.

Hamburg Steak:
In the late eighteenth century, the largest ports in Europe were in Germany. Sailors who had visited the ports of Hamburg and New York brought this food and the term “Hamburg steak” into popular usage. To attract German sailors, eating stands along the New York City harbor offered “steak cooked in the Hamburg style.”

Immigrants to the United States from German-speaking countries brought with them some of their favorite foods. One of them was Hamburg Steak. The Germans simply flavored shredded low-grade beef with regional spices, and both cooked and raw it became a standard meal among the poorer classes. In the seaport town of Hamburg, it acquired the name Hamburg steak. Today, this hamburger patty is no longer called Hamburg Steak in Germany but rather “Frikadelle,” “Frikandelle” or “Bulette,” originally Italian and French words.

According to Theodora Fitzgibbon in her book The Food of the Western World – An Encyclopedia of Food from North America and Europe:

The hamburger originated on the German Hamburg-Amerika line boats, which brought emigrants to America in the 1850s. There was at that time a famous Hamburg beef which was salted and sometimes slightly smoked, and therefore ideal for keeping on a long sea voyage. As it was hard, it was minced and sometimes stretched with soaked breadcrumbs and chopped onion. It was popular with the Jewish emigrants, who continued to make Hamburg steaks, as the patties were then called, with fresh meat when they settled in the U.S.

The cookbooks:

1758 – By the mid-18th century, German immigrants had also begun arriving in England. One recipe, titled “Hamburgh Sausage,” appeared in Hannah Glasse’s 1758 English cookbook called The Art of Cookery Made Plain and Easy. It consisted of chopped beef, suet, and spices. The author recommended that this sausage be served with toasted bread. Hannah Glasse’s cookbook was also very popular in Colonial America, although it was not published in the United States until 1805. This American edition also contained the “Hamburgh Sausage” recipe with slight revisions.

1884 – The original Boston Cooking School Cook Book, by Mrs. D.A. Lincoln (Mary Bailey), had a recipe for Broiled Meat Cakes and also Hamburgh Steak:

Broiled Meat Cakes – Chop lean, raw beef quite fine. Season with salt, pepper, and a little chopped onion, or onion juice. Make it into small flat cakes, and broil on a well-greased gridiron or on a hot frying pan. Serve very hot with butter or Maître d’Hôtel sauce.

Hamburgh Steak – Pound a slice of round steak enough to break the fibre. Fry two or three onions, minced fine, in butter until slightly browned. Spread the onions over the meat, fold the ends of the meat together, and pound again, to keep the onions in the middle. Broil two or three minutes. Spread with butter, salt, and pepper.

1894 – In the 1894 edition of the book The Epicurean: A Complete Treatise of Analytical & Practical Studies, by Charles Ranhofer (1836-1899), chef at the famous Delmonico’s restaurant in New York, there is a listing for Beef Steak Hamburg Style. The dish is also listed in French as Bifteck Hambourgeoise. What made his version unique was that the recipe called for the ground beef to be mixed with kidney and bone marrow:

“One pound of tenderloin beef free of sinews and fat; chop it up on a chopping block with four ounces of beef kidney suet, free of nerves and skin, or else the same quantity of marrow; add one ounce of chopped onions fried in butter without attaining color; season all with salt, pepper and nutmeg, and divide the preparation into balls, each one weighing four ounces; flatten them down, roll them in bread-crumbs and fry them in a sauté pan in butter. When of a fine color on both sides, dish them up pouring a good thickened gravy . . . over.”

1906 – Upton Sinclair (1878-1968), American novelist, published The Jungle, which told of the horrors of Chicago meat-packing plants. Sinclair was surprised that the public missed the main point of his impressionistic fiction and took it to be an indictment of the unhygienic conditions of the meat-packing industry. The book caused much distrust of chopped meat in the United States, and people avoided it for several years.

Invention of Meat Choppers:
Referring to ground beef as hamburger dates to the invention of the mechanical meat choppers during the 1800s. It was not until the early nineteenth century that wood, tin, and pewter cylinders with wooden plunger pushers became common. Steve Church of Ridgecrest, California uncovered some long forgotten U. S. patents on Meat Cutters:
In mid-19th-century America, preparations of raw beef that had been chopped, chipped, ground or scraped were a common prescription for digestive issues. After a New York doctor, James H. Salisbury suggested in 1867 that cooked beef patties might be just as healthy, cooks and physicians alike quickly adopted the “Salisbury Steak”. Around the same time, the first popular meat grinders for home use became widely available (Salisbury endorsed one called the American Chopper) setting the stage for an explosion of readily available ground beef.

The hamburger seems to have made its jump from plate to bun in the last decades of the 19th century, though the site of this transformation is highly contested. Lunch wagons, fair stands and roadside restaurants in Wisconsin, Connecticut, Ohio, New York and Texas have all been put forward as possible sites of the hamburger’s birth. Whatever its genesis, the burger-on-a-bun found its first wide audience at the 1904 St. Louis World’s Fair, which also introduced millions of Americans to new foods ranging from waffle ice cream cones and cotton candy to peanut butter and iced tea.

Two years later, though, disaster struck in the form of Upton Sinclair’s journalistic novel The Jungle, which detailed the unsavory side of the American meatpacking industry. Industrial ground beef was easy to adulterate with fillers, preservatives and meat scraps, and the hamburger became a prime suspect.

The history of the American burger:

The hamburger might have remained on the seamier margins of American cuisine were it not for the vision of Edgar “Billy” Ingram and Walter Anderson, who opened their first White Castle restaurant in Kansas in 1921. Sheathed inside and out in gleaming porcelain and stainless steel, White Castle restaurants countered hamburger meat’s low reputation by becoming bastions of cleanliness, health and hygiene (Ingram even commissioned a medical school study to show the health benefits of hamburgers). His system, which included on-premise meat grinding, worked well and was the inspiration for other national hamburger chains founded in the boom years after World War II: McDonald’s and In-N-Out Burger (both founded in 1948), Burger King (1954) and Wendy’s (1969).

Only one of the claimants below served their hamburgers on a bun – Oscar Weber Bilby in 1891. The rest served them as sandwiches between two slices of bread.

Most of the following stories on the history of the hamburger were told after the fact and are based on the recollections of family members. For many people, which story or legend you believe probably depends on where you are from. You be the judge! The claims are as follows:

 

1885 – Charlie Nagreen of Seymour, Wisconsin – At the age of 15, he went to the Outagamie County Fair and set up an ox-drawn food stand selling meatballs. Business wasn’t good, and he quickly realized it was because meatballs were too difficult to eat while strolling around the fair. In a flash of innovation, he flattened the meatballs, placed them between two slices of bread and called his new creation a hamburger. He became known to many as “Hamburger Charlie.” He returned to sell hamburgers at the fair every year until his death in 1951, and he would entertain people with guitar, mouth organ and his jingle:

Hamburgers, hamburgers, hamburgers hot; onions in the middle, pickle on top. Makes your lips go flippity flop.

The town of Seymour, Wisconsin is so certain about this claim that they even have a Hamburger Hall of Fame that they built as a tribute to Charlie Nagreen and the legacy he left behind. The town claims to be “Home of the Hamburger” and holds an annual Burger Festival on the first Saturday of August each year. Events include a ketchup slide, bun toss, and hamburger-eating contest, as well as the “world’s largest hamburger parade.”

 

On May 9, 2007, members of the Wisconsin legislature declared Seymour, Wisconsin, as the home of the hamburger:

Whereas, Seymour, Wisconsin, is the rightful home of the hamburger; and,
Whereas, other accounts of the origination of the hamburger trace back only so far as the 1880s, while Seymour’s claim can be traced to 1885; and,
Whereas, Charles Nagreen, also known as Hamburger Charlie, of Seymour, Wisconsin, began calling ground beef patties in a bun “hamburgers” in 1885; and,
Whereas, Hamburger Charlie first sold his world-famous hamburgers at age 15 at the first Seymour Fair in 1885, and later at the Brown and Outagamie county fairs; and,
Whereas, Hamburger Charlie employed as many as eight people at his famous hamburger tent, selling 150 pounds of hamburgers on some days; and,
Whereas, the hamburger has since become an American classic, enjoyed by families and backyard grills alike; now, therefore, be it
Resolved by the assembly, the senate concurring, That the members of the Wisconsin legislature declare Seymour, Wisconsin, the Original Home of the Hamburger.

 

1885 – The family of Frank and Charles Menches from Akron, Ohio, claims the brothers invented the hamburger while working a 100-man traveling concession circuit at events (fairs, race meetings, and farmers’ picnics) in the Midwest in the early 1880s. During a stop at the Erie County Fair in Hamburg, New York, the brothers ran out of pork for their hot sausage patty sandwiches. Because it was a particularly hot day, the local butchers had stopped slaughtering pigs. The butcher suggested that they substitute beef for the pork. The brothers ground up the beef, mixed it with some brown sugar, coffee, and other spices and served it as a sandwich between two pieces of bread. They called this sandwich the “hamburger” after Hamburg, New York, where the fair was being held. According to family legend, Frank didn’t really know what to call it, so he looked up, saw the banner for the Hamburg fair and said, “This is the hamburger.” Frank’s 1951 obituary in The Los Angeles Times acknowledged him as the “inventor” of the hamburger.

Hamburg held its first Burgerfest in 1985 to mark the 100th anniversary of the birth of the hamburger after organizers discovered a history book detailing the burger’s origins.

 

In 1991, descendants of the Menches brothers stumbled across the original recipe among some old papers their great-grandmother had left behind. After selling their burgers at county fairs for a few years, the family opened the Menches Bros. Restaurant in Akron, Ohio. The Menches family is still in the restaurant business and still serving hamburgers in Ohio.

 

On May 28, 2005, the city of Akron, Ohio, hosted the First Annual National Hamburger Festival to celebrate the 120th anniversary of the invention of the hamburger. The festival was dedicated to Frank and Charles Menches. That is how confident Akron is in the Menches family’s contested claim that two of its residents invented the hamburger. The Ohio legislature is also considering making the hamburger the state food.


1891 – The family of Oscar Weber Bilby claims the first known hamburger on a bun was served on Grandpa Oscar’s farm just west of Tulsa, Oklahoma, in 1891. The family says that Grandpa Oscar was the first to add the bun, but they concede that hamburger sandwiches made with bread may predate Grandpa Oscar’s famous hamburger.

Michael Wallis, travel writer and reporter for Oklahoma Today magazine, did an extensive search in 1995 for the true origins of the hamburger and determined that Oscar Weber Bilby himself was the creator of the hamburger as we know it. From Wallis’s 1995 article, “Welcome To Hamburger Heaven,” which includes an interview with Harold Bilby:

The story has been passed down through the generations like a family Bible. “Grandpa himself told me that it was in June of 1891 when he took up a chunk of iron and made himself a big ol’ grill,” explains Harold. “Then the next month on the Fourth of July he built a hickory wood fire underneath that grill, and when those coals were glowing hot, he took some ground Angus meat and fired up a big batch of hamburgers. When they were cooked all good and juicy, he put them on my Grandma Fanny’s homemade yeast buns – the best buns in all the world, made from her own secret recipe. He served those burgers on buns to neighbors and friends under a grove of pecan trees . . . They couldn’t get enough, so Grandpa hosted another big feed. He did that every Fourth of July, and sometimes as many as 125 people showed up.”

 

Simple math supports Harold Bilby’s contention: if his Grandpa served burgers on Grandma Fanny’s buns in 1891, then the Bilbys eclipsed the St. Louis World’s Fair vendors by at least thirteen years. That would make Oklahoma the cradle of the hamburger. “There’s not even the trace of a doubt in my mind,” says Harold. “My grandpa invented the hamburger on a bun right here in what became Oklahoma, and if anybody wants to say different, then let them prove otherwise.”

 

In 1933, Oscar and his son, Leo, opened the family’s first hamburger stand in Tulsa, Oklahoma, called Weber’s Superior Root Beer Stand. They still use the same grill used in 1891, with one minor variation: the wood stove has been converted to natural gas. In a letter to Linda Stradley of What’s Cooking America (the source cited below), dated July 31, 2004, Rick Bilby states the following:

My great-grandfather, Oscar Weber Bilby, invented the hamburger on July 4, 1891. He served ground beef patties that were seared to perfection on an open flame from a hand-made grill. My great-grandmother Fanny made her own home-made yeast hamburger buns to put around the ground beef patties. They served this new sandwich along with their tasty home-made root beer, which was also carbonated with yeast. People would come from all over the county on July 4th each year to consume and enjoy these treats. To this day we still cook our hamburgers on grandpa’s grill, which is now fired by natural gas.

 

On April 13, 1995, Governor Frank Keating of Oklahoma proclaimed that the first true hamburger on a bun was created and consumed in Tulsa in 1891. The State of Oklahoma proclamation states:

Whereas, scurrilous rumors have credited Athens, Texas, as the birthplace of the hamburger, claiming for that region south of the Red River commonly known as Baja Oklahoma a fame and renown which are hardly its due; and
Whereas, the Legislature of Baja Oklahoma has gone so far as to declare April 3, 1995, to be Athens Day at the State Capitol, largely on the strength of this bogus claim; and
Whereas, while the residents, the scenery, the hospitality and the food found in Athens are no doubt superior to those in virtually any other locale, they must be recognized, in the words of Mark Twain, as “the lightning bug is to the lightning” when compared with the Great City of Tulsa in the Great State of Oklahoma; and
Whereas, although someone in Athens, in the 1860’s, may have placed cooked ground beef between two slices of bread, this minor accomplishment can in no way be regarded as the equal of the true hamburger, which comes on a bun accompanied by such delights as pickles, onions, lettuce, tomato, cheese and, in some cases, special sauce; and
Whereas, the first true hamburger on a bun, as meticulous research shows, was created and consumed in Tulsa in 1891 and was only copied for resale at the St. Louis World’s Fair a full 13 years after that momentous and history-making occasion:
Now Therefore, I, Frank Keating, Governor of the State of Oklahoma, do hereby proclaim April 12, 1995, as THE REAL BIRTHPLACE OF THE HAMBURGER IN TULSA DAY.

 

1900 – Louis Lassen of New Haven, Connecticut, is also recorded as serving the first “burger” at his New Haven luncheonette called Louis’ Lunch Wagon. Louis ran a small lunch wagon selling steak sandwiches to local factory workers. A frugal businessman, he did not like to waste the excess beef from his daily lunch rush. It is said that he ground up some scraps of beef and served them as a sandwich, between two slices of toasted bread, to a customer who was in a hurry and wanted to eat on the run.

 

Kenneth Lassen, Louis’ grandson, was quoted in the September 25, 1991, Athens Daily Review as saying:

“We have signed, dated and notarized affidavits saying we served the first hamburger sandwiches in 1900. Other people may have been serving the steak but there’s a big difference between a hamburger steak and a hamburger sandwich.”

 

In the mid-1960s, the New Haven Preservation Trust placed a plaque on the building where Louis’ Lunch is located proclaiming Louis’ Lunch to be the first place the hamburger was sold.

Louis’ Lunch is still selling its hamburgers from a small brick building in New Haven. The sandwiches are grilled vertically in antique gas grills and served between pieces of toast rather than on a bun, and the restaurant refuses to provide mustard or ketchup.

 

The Library of Congress named Louis’ Lunch a “Connecticut Legacy.” The following is taken from the Congressional Record, 27 July 2000, page E1377:

Honoring Louis’ Lunch on Its 105th Anniversary – Representative Rosa L. DeLauro:
. . . it is with great pleasure that I rise today to celebrate the 105th anniversary of a true New Haven landmark: Louis’ Lunch. Recently the Lassen family celebrated this landmark as well as the 100th anniversary of their claim to fame — the invention and commercial serving of one of America’s favorites, the hamburger . . . The Lassens and the community of New Haven shared unparalleled excitement when the Library of Congress named Louis’ Lunch a “Connecticut Legacy” — nothing could be more true.

 

1901 or 1902 – Bert W. Gray of Clarinda, Iowa, in an article by Paige Carlin for the Omaha World Herald newspaper, takes no credit for having invented the hamburger, but he stakes an uncompromising claim to being the “daddy” of the hamburger industry. He served his hamburger on a bun:

The hamburger business all started about 1901 or 1902 (The Grays aren’t sure which) when Mr. Gray operated a little cafe on the east side of Clarinda’s Courthouse Square.

Mr. Gray recalled: “There was an old German here named Ail Wall (or Wahl, maybe) and he ran a butcher shop. One day he was stuffing bologna with a little hand machine, and he said to me: ‘Bert, why wouldn’t ground meat make a good sandwich?’”

“I said I’d try it, so I took this ground beef and mixed it with an egg batter and fried it. I couldn’t get anybody to eat it. I quit the egg batter and just took the meat with a little flour to hold it together. The new technique paid off.”

“He almost ran the other cafes out of the sandwich business,” Mrs. Gray put in. “He could make hamburgers so nice and soft and juicy – better than I ever could,” she added.

“This old German, Wall, came over here from Hamburg, and that’s what he said to call it,” Mr. Gray explained. “I sold them for a nickel apiece in those days. That was when the meat was 10 or 12 cents a pound,” he added. “I bought $5 or $6 worth of meat at a time and I got three or four dozen pans of buns from the bakery a day.”

One time the Grays heard a conflicting claim by a man (somewhere in the northern part of the state) that he was the hamburger’s inventor. “I didn’t pay any attention to him,” Mr. Gray snorted. “I’ve got plenty of proof mine was the first,” he said.

There is so much more to read at https://whatscookingamerica.net/history/hamburgerhistory.htm

Window Tax, aka Daylight Robbery

It was interesting to learn about the etymology of “daylight robbery”; it really prompted me to dig deeper.

William III was short of money, which he attempted to rectify by introducing the much-despised Window Tax. As the name suggests, this was a tax levied on the windows or window-like openings of a property. The details were much amended over time, but the tax was levied originally on all dwellings except cottages. The upper classes, having the largest houses, paid the most. Some wealthy individuals used their ability to pay as a mark of status and demonstrated their wealth by ostentatiously building homes with many windows.

Daylight robbery - Hardwick Hall

What the Cavendish family, who owned Hardwick Hall (built 1590s), thought about it isn’t recorded. On the one hand, they had cause for complaint – the property was famous for its many windows and light and airy interiors, as celebrated in the rhyme: “Hardwick Hall, more glass than wall”. On the other hand, they were extremely rich and well able to pay.

Taxes are rarely popular, but the Window Tax, which was considered to tax the very stuff of life, that is, light and air, was singled out for particular loathing. People went to great pains to avoid paying it and many windows were bricked up for that reason. Many examples of buildings with brick window panels, sometimes with painted-on trompe l’oeil windows, still survive.

Fake windows - avoiding window tax

The sight of such windows is so much a part of the English architectural folk memory that the example pictured, of a recently built property in Poundbury, Dorset, appears to have been built with fake bricked-up windows, even though the tax itself has long since been abolished.

So, that’s the case for the prosecution: the English were robbed of their daylight by the Window Tax. That’s daylight robbery in anyone’s book, so do we need to look any further for the origin of the phrase? Well, yes we do.

Let’s move to the 20th century for the case for the defence. The phrase isn’t known in print until 1916 in Hobson’s Choice, a comic play by Harold Brighouse. Even there the context doesn’t explicitly link it to unfair overcharging or the like. We have to wait until 1949 for a citation that is clearly related to a purchase, in Daniel Marcus Davin’s Roads from Home:

“I can never afford it,” said his sister. “It’s daylight robbery.”

So, Daylight robbery aka Window Tax.

https://www.phrases.org.uk/meanings/daylight-robbery.html

What Was the Window Tax?

The ‘Window Tax’ was a tax devised by King William III in the 1690s. It was levied on the windows or openings of a building.

The more windows a building had, the more tax its owner paid. It was essentially a progressive tax whereby the wealthier members of society paid the most, as they tended to have larger houses and more windows on those houses.

Indeed, many rich individuals took paying the tax as a badge of honour: the more tax they paid, the more wealth and status they were seen to have. In fact, some houses were built with extra windows for that specific purpose.

How the Swiss Ruled Chocolate

I came across this excerpt from a very interesting article. You can read the full article here:

The Unfinished Dream Behind Amul’s Foray into the Chocolate Industry (thewire.in)

 

Theobroma cacao, food of the gods, had been consumed in Latin America in liquid form since Aztec and Mayan times, but it was the making of the milk chocolate bar that brought it within every person’s reach. Spanish colonisers brought chocolate to Europe from Mexico in 1528, and it spread across the continent to reach England by the 1650s. It took another 200 years and an industrial revolution to make the first chocolate bar. J.S. Fry & Sons of Bristol, England, made the first solid chocolate bar in 1847, and some 100 miles away in Birmingham, John Cadbury made his eponymous solid chocolate bar by 1849. It took yet another two and a half decades for milk chocolate to be made, which made chocolate more palatable and pocket friendly. That development took place in Vevey, Switzerland.

Vevey too had become a hub for chocolate factories by the early 1800s. Francois-Louis Cailler started his factory in 1820. Kohler started his factory in 1830. Cailler’s son-in-law Daniel Peter started his factory in 1867, around the same time that his neighbour and friend Henri Nestle started his infant milk food business. Henri Nestle had a hand in Daniel Peter’s development of milk chocolate in 1875, providing him with condensed milk.

Eventually, all three of them – Cailler, Peter and Kohler – became part of Nestle in 1929. Lindt initially worked at Kohler’s and then set up his own chocolate factory in 1879, establishing his own brand. One of Lindt’s initial customers, Jean Tobler, opened his factory in 1899, which eventually launched ‘Toblerone’. Thus, by the turn of the 20th century, the Swiss had taken the lead in milk chocolates, helped in no small measure by a burgeoning dairy industry and the Swiss cow.

Cadbury made milk chocolate only in 1897. Its defining milk chocolate – Cadbury Dairy Milk – came out in 1905. Fry merged with Cadbury in 1919. Elsewhere in Europe, Cacao Barry (France) and Callebaut (Belgium) got into the chocolate business in 1911, while Godiva started in Belgium in 1926.

Across the pond, Milton S. Hershey developed his own formula for milk chocolate and made the Hershey bar in 1900. Frank Mars started his milk chocolate bar in the 1920s, and his son, Forrest Sr., started M&M in 1940. Meiji in Japan launched its milk chocolate in 1926.

In the absence of non-disclosure agreements, and because milk chocolate was an innovative product, these food-tech startups of their day relied on secrecy and family ties to keep their formulae from being copied. Spying on each other was rampant, as portrayed in Roald Dahl’s book Charlie and the Chocolate Factory. Even today, Ferrero (started in 1946) doesn’t allow cameras or tours in its factories. More than a century later, these brands and companies continue to dominate the $106 billion chocolate market.

Even as the world consumes chocolates worth $106 billion annually, the countries producing cocoa beans get only $8.6 billion – less than 10% of the consumer dollar. In fact, 60% of the world’s cocoa beans are produced in Ghana and Ivory Coast. Farmers growing cocoa beans there struggle for an income of $2 a day and are too poor to eat the chocolates made from their crops. About 80% of the world’s cocoa, from the top five producing countries, flows to Europe and North America. The inequality in trade is complicated by the presence of middlemen known as trader-grinders. Out of the 4.6 million tonnes of cocoa beans produced annually, just three companies – Cargill, Olam and Barry Callebaut – control 60% of the flow. Eight companies control more than 90% of it.

Half a century later, there is a trend of Fairtrade chocolates in the western world. European brands like Divine Chocolate, in which the Ghanaian farmers’ cooperative Kuapa Kokoo has a 20% stake, represent heart-warming initiatives.

Salary = Salt

A while ago I heard about the history of the word “salary” being linked to salt, and so I checked it out.

Well –

Salt was so valuable that soldiers in the Roman army were sometimes paid with it instead of money. Their monthly allowance was called a “salarium” (“sal” being the Latin word for salt). This Latin root can be recognized in the French word “salaire” — and it eventually made its way into the English language as the word “salary.”