The resounding clang of the Nasdaq opening bell on that crisp January morning in 2024 was more than a simple sound; it was a symphony of personal and professional triumph, a crescendo that reverberated through the corridors of my life. Standing on that iconic podium, a wave of emotions washed over me as I reflected upon the intricate tapestry of experiences that had led me to this pivotal moment. It was a journey marked by resilience, adaptability, and an unyielding pursuit of knowledge and innovation, a testament to the transformative power of unconventional paths and the unwavering spirit of the human will.
The Foundation: A Nomadic Upbringing
My story begins with a childhood defined by constant change and upheaval. As the son of a military officer, I became accustomed to a life of frequent relocations, adapting to new environments and cultures with remarkable speed and agility. In the first four decades of my life, I moved an astounding 24 times, traversing the vast expanse of the United States and even venturing to India. While this nomadic lifestyle presented its share of challenges, it also instilled in me a profound sense of adaptability, resourcefulness, and the ability to forge connections with people from all walks of life.
Each move was a fresh start, an opportunity to reinvent myself and embrace new experiences. I lived in bustling metropolises and quaint rural towns, interacted with individuals from diverse socioeconomic backgrounds, and witnessed firsthand the kaleidoscope of human experiences that shaped our world. This exposure broadened my horizons, challenged my preconceived notions, and cultivated a deep appreciation for the richness and complexity of our global community.
My academic journey mirrored the nomadic nature of my upbringing. In pursuit of a bachelor’s degree in physics, a postgraduate diploma in marketing, and an MBA, I attended nine different educational institutions across two continents. This constant shuffling of schools meant that I never truly established roots in a single place, but it also honed my ability to assimilate into new social circles, navigate different educational systems, and adapt to varying teaching styles. I became a chameleon of sorts, blending seamlessly into new environments and building rapport with classmates and teachers, regardless of their backgrounds or personalities.
The Unconventional Path: A Mosaic of Experiences
My career trajectory was equally unconventional, a mosaic of experiences that spanned diverse industries and roles. I delved into the world of marketing consultancy, honing my skills in strategic planning and brand development. I immersed myself in the realm of product management, gaining insights into the intricate dance between customer needs and technological innovation. I ventured into business development, mastering the art of forging partnerships and cultivating mutually beneficial relationships.
Each experience was a stepping stone, a building block in the foundation of my professional identity. I learned to embrace ambiguity, to navigate complex challenges, and to identify opportunities where others saw obstacles. I developed a deep appreciation for the power of collaboration, recognizing that the greatest achievements are often the result of collective effort.
The Entrepreneurial Spirit: A Burning Desire to Innovate
Throughout my career, I was drawn to the allure of entrepreneurship and innovation, inspired by the stories of visionary leaders who had disrupted their industries and left an indelible mark on the world. I yearned to be part of something bigger than myself, to create something that would have a lasting impact on society. This burning desire led me to the world of Special Purpose Acquisition Companies (SPACs), a relatively new and rapidly evolving financial instrument that offered a unique opportunity to combine my entrepreneurial drive with my experience in capital markets.
SPACs, I discovered, were a powerful tool for democratizing access to investment opportunities, empowering innovative companies, and creating value for all stakeholders. They were a blank canvas upon which I could paint my vision of a future where entrepreneurship and social impact could coexist and thrive.
The Zoomcar Merger: A Defining Moment
The culmination of my SPAC journey came with the Zoomcar merger, a landmark transaction that would forever alter the trajectory of my career. Zoomcar, a trailblazing car-sharing platform that had established a dominant presence in emerging markets, resonated deeply with my values and aspirations. Their innovative business model, which leveraged technology to provide affordable and accessible mobility solutions, coupled with their unwavering commitment to social impact, aligned perfectly with my investment thesis.
The path to a successful merger was fraught with challenges. We encountered volatile market conditions, regulatory complexities in multiple jurisdictions, and the inherent difficulties of integrating two distinct corporate cultures. But through it all, our team remained steadfast in its pursuit of a mutually beneficial outcome. We worked tirelessly, leveraging our collective expertise in finance, law, technology, and operations. We engaged in countless hours of negotiation, carefully crafting deal terms that would protect the interests of all stakeholders.
The ringing of the opening bell at Nasdaq was a moment of profound significance, a testament to the power of perseverance, collaboration, and the unwavering belief in a vision. It was a celebration of the entrepreneurial spirit that drives innovation and progress, a reminder that even the most ambitious dreams can be realized through unwavering dedication and a steadfast commitment to one’s values.
The Ripple Effect: Inspiring a New Generation of Leaders
The success of the Zoomcar merger had a ripple effect throughout the SPAC industry, inspiring renewed optimism and fueling a resurgence of interest in this innovative financing vehicle. It served as a powerful example of how SPACs could be used to support companies with a strong commitment to social impact, creating a win-win scenario for investors and society alike.
Theatre is often perceived as merely an art form or a means of entertainment. However, my personal experiences in theatre during high school and college have taught me invaluable lessons that have shaped me into a better professional and leader. The skills I acquired through theatre have been invaluable in my journey as a successful operator and Chief Operating Officer (COO).
The Roman Epic Experience
My first encounter with theatre was during high school, when I acted in a production of the Roman epic “Caligula.” The experience was unique and eye-opening. The rigorous rehearsal process instilled in me the importance of commitment, discipline, and attention to detail – qualities that would later prove essential in my professional life.
Discovering My True Passion
During college, I continued exploring theatre, but after a couple of acting productions, I realized that my true passion lay backstage. The chaos of a production required a semblance of sanity, and a great stage manager had to have everything planned and accounted for, with contingencies in place for unexpected situations.
The Disaster that Taught Me Resilience
I vividly remember the first show I stage-managed – it was a disaster! During the technical rehearsal, every possible thing went wrong – the backdrop fell, props were missing, and an irate parent even hurled abuse at me because her child “didn’t get to shine” due to my perceived shortcomings. This experience taught me a valuable lesson in resilience and the importance of preparation.
The Production Book: My Secret Weapon
From that low point, I vowed to improve and started meticulously documenting every aspect of the production in my “production book.” This book became synonymous with my identity – people associated me with my book, my utility jacket with a thousand pockets, and my all-black outfit (which eventually became my go-to wardrobe).
The Six Pillars of Stagecraft and Life
Through my theatre experiences, I developed a six-pillar approach that has served me well in both stagecraft and corporate life:
1. Understand: Sitting through rehearsals allowed me to grasp the rhythm and cadence of the show, much like understanding the pace of development in the corporate world.
2. Prepare: Ensuring that all necessary props and tools were in place and positioned correctly, just as having the right resources and tools is crucial in a corporate setting.
3. Plan: Meticulously planning the movement of props, actors, and crew members, akin to strategically allocating resources within an organization for optimal efficiency.
4. Practice: Rehearsing scene changes, movements, and team coordination, just as practice and preparation are essential for successful execution in any professional endeavor.
5. Feedback: Seeking input from actors and crew members to identify areas for improvement, mirroring the importance of open communication and feedback loops in a corporate environment.
6. Execute: When the show begins, all the training and practice come into play, seamlessly executing the plan while being prepared for contingencies – a skill that directly translates to managing operations and addressing issues in the corporate world.
The Lasting Impact of Theatre
My stint in theatre has had a profound and lasting impact on my personal and professional life. The lessons learned backstage – commitment, discipline, attention to detail, resilience, preparation, planning, practice, feedback, and execution – have made me the best COO I could be across various organizations. Theatre has truly been a transformative experience, equipping me with essential life skills that have been invaluable in my journey as a successful operator and leader.
The 47 Ronin, Warriors of Ako, committed seppuku on this day, March 20.
The story began in 1701, when the Lord of Ako, Asano Naganori, attacked the Chief of Protocol, Lord Kira, within the grounds of Edo Castle, an offense for which he was ordered to commit seppuku. Asano’s lands at Ako (now part of Hyogo Prefecture) were confiscated, and his more than 300 samurai were forced to disband.
On the night of Tuesday, January 30, 1703 (the 14th day of the 12th month by the old Japanese calendar, and the date by which the event is still remembered in Japan), 47 of the former men of Ako stormed the mansion of Kira Yoshinaka, killing the 62-year-old Chief of Protocol.
Having cut off the man’s head, they carried it about 14 km through the streets of Edo to the grave of their former master, Lord Asano, at the Sengaku-ji, a temple in the southern districts of Edo. Then having paid their respects before the grave, they turned themselves in to the authorities.
Although they defied orders prohibiting revenge, they had exemplified the way of the samurai. The Shogunate spent weeks discussing the pros and cons of their actions, before deciding to allow them to commit seppuku rather than be executed.
According to the story, they committed seppuku in the grounds of the Sengaku-ji Temple upon completion of their task.
In fact, the 46 men were separated and billeted out to the homes of various daimyo in Edo while the Bakufu decided upon their fate.
Oishi Kuranosuke, leader of the 47 Ronin, and 16 of the Ronin were sent to the mansion of Hosokawa Tatsunari, Lord of Higo (Kumamoto), located in modern-day Minato-ku, Tokyo. A monument and the site of the mass seppuku have been preserved.
Oishi’s son was confined to the home of Matsudaira Sadanao, and the two were allowed to meet on the evening prior to their deaths. On February 3, 1703, the Bakufu issued orders that the men, being held in various daimyo homes, were to commit seppuku the following day. Four locations around Edo were decided on and hastily prepared for the following day’s actions. Accepting the sentence as an honor, on Tuesday, March 20, 1703, they performed the seppuku rituals.
Once the men had redeemed themselves through self-destruction, their decapitated bodies were folded into a fetal position, with their heads placed on their knees inside a round wooden tub-like coffin, and carried to the Sengaku-ji where they were buried.
On the gravestones of all the ronin appears the kanji for “yaiba,” written 刃, except for one: on the stone of the ashigaru-class foot soldier Terasaka Kichiemon Nobuyuki, the 刃 kanji does not appear.
Terasaka was an ashigaru serving the Yoshida clan among the Ronin. At the time of the attack, Terasaka was sent by Oishi Kuranosuke to inform the remaining Asano clan, including Lord Asano’s widow in modern-day Hyogo Prefecture, that the band of 47 Ronin had avenged the death of their master. Because of his actions, he was pardoned by the shogun. There are claims that he was pardoned because of his young age; however, Oishi Kuranosuke’s son was, at 14 or 15, even younger than Terasaka. It is more probable that his rank as an ashigaru did not require him to commit seppuku.
Terasaka died 43 years after the incident, aged 83 (some sources list 78; however, a letter by his grandson that survives in Kochi city confirms his grandfather’s age). He was later buried alongside his comrades in arms, yet the kanji 刃 does not appear on his gravestone. He is reported to have become a Buddhist priest, serving at the Sengaku-ji and tending to the graves of his comrades following the incident.
The recent SEC rule changes for SPACs have sparked conversations across the investment landscape. While some view them as a blow to these “blank check companies,” I, like many others, remain optimistic about the future of SPACs as a valuable tool for entering the public market.
The new focus on enhanced disclosures and stricter projection guidelines is undoubtedly a positive step. Transparency is crucial for investor trust, and by requiring SPACs to shed more light on their operations, compensation structures, and target companies, the SEC is promoting more informed decision-making. However, as a SPAC operator myself, I believe the underlying power of this instrument remains strong. Let’s demystify the advantages:
1. A Smoother Path to Public Visibility: Compared to the traditional IPO marathon, SPACs offer a more controlled and predictable journey. Costs are upfront and transparent, eliminating the uncertainty of fundraising rounds. Moreover, having completed much of the due diligence during the SPAC formation stage, companies merging with SPACs can hit the ground running as publicly traded entities. This “IPO-as-a-Service” approach streamlines the process and minimizes disruption to business operations.
2. Leveraging Financial Strength: Strong balance sheets give companies an edge in navigating the capital markets. This pre-IPO stability translates into bolder and more achievable future projections, setting a solid foundation for their public life. With less financial pressure after merging, companies can focus on execution and delivering on their promises, building investor confidence.
3. Focus on Fundamentals over Hype: The increased disclosure requirements mandated by the SEC will help shift the focus from speculative projections to the company’s core strengths and potential. Investors will have access to richer data and deeper insights, enabling them to make informed decisions based on actual realities rather than hypothetical “moonshot” scenarios.
Of course, adapting to the new regulatory landscape will require strategic adjustments. As a recent SPAC operator who prioritized transparency throughout the merger process, I am confident that embracing these changes will ultimately strengthen the SPAC ecosystem and foster healthier, more sustainable ventures.
In conclusion, while the new SEC rules undoubtedly reshape the SPAC landscape, they represent an opportunity for growth and maturity. By embracing transparency, leveraging financial stability, and focusing on fundamentals, SPACs can remain a powerful tool for companies and investors alike. Let’s approach this evolution with optimism and work together to build a better, more informed public market for the future.
________
Disclaimer: The views and opinions expressed in this blog post are solely my own and do not necessarily reflect the official policy or position of any organization I am associated with. These views are personal and are provided for informational purposes only. The information presented here is accurate and true to the best of my knowledge, but there may be omissions, errors, or mistakes. Any reliance you place on the information from this blog/post is strictly at your own risk. I reserve the right to change, update, or remove content at any time. The content provided is my own opinion and not intended to malign any individual, group, organization, or anyone or anything.
While I am a big fan of the film The Last Samurai, watching it I noticed a lot of parallels between history and the story on screen, and I was curious enough about the link to dig a little further. The Meiji Restoration was a fascinating turning point in Japan’s history, and the story of The Last Samurai draws on it heavily.
I found the story of Jules Brunet. https://en.wikipedia.org/wiki/Jules_Brunet
Jules Brunet was sent to Japan to train their military in Western tactics before fighting for the samurai against Meiji Imperialists during the Boshin War.
Not many people know the true story of The Last Samurai, the sweeping Tom Cruise epic of 2003. His character, the noble Captain Algren, was actually primarily based on a real person: the French officer Jules Brunet.
Brunet was sent to Japan to train soldiers on how to use modern weapons and tactics. He later chose to stay and fight alongside the Tokugawa samurai in their resistance against Emperor Meiji and his move to modernize Japan.
But how much of this reality is represented in the blockbuster?
The True Story Of The Last Samurai: The Boshin War
Japan of the 19th century was an isolated nation. Contact with foreigners was largely suppressed. But everything changed in 1853 when American naval commander Matthew Perry appeared in Tokyo’s harbor with a fleet of modern ships.
Wikimedia Commons: A painting of samurai rebel troops by none other than Jules Brunet. Notice how the samurai have both Western and traditional equipment, a point of the true story of The Last Samurai not explored in the movie.
For the first time ever, Japan was forced to open itself up to the outside world. The Japanese then signed a treaty with the U.S. the following year, the Kanagawa Treaty, which allowed American vessels to dock in two Japanese harbors. America also established a consul in Shimoda.
The event was a shock to Japan and split the nation on whether it should modernize with the rest of the world or remain traditional. The Boshin War of 1868-1869, also known as the Japanese Revolution, was the bloody result of this split.
On one side was Japan’s Meiji Emperor, backed by powerful figures who sought to Westernize Japan and revive the emperor’s power. On the opposing side was the Tokugawa Shogunate, a continuation of the military dictatorship of elite samurai that had ruled Japan since 1192.
Although the Tokugawa shogun, or leader, Yoshinobu, agreed to return power to the emperor, the peaceful transition turned violent when the Emperor was convinced to issue a decree that dissolved the Tokugawa house instead.
The Tokugawa shogun protested, which naturally resulted in war. As it happened, 30-year-old French military veteran Jules Brunet was already in Japan when war broke out.
Wikimedia Commons Samurai of the Choshu clan during the Boshin War in late 1860s Japan.
Jules Brunet’s Role In The True Story Of The Last Samurai
Born on January 2, 1838, in Belfort, France, Jules Brunet followed a military career specializing in artillery. He first saw combat during the French intervention in Mexico from 1862 to 1864 where he was awarded the Légion d’honneur — the highest French military honor.
Wikimedia Commons Jules Brunet in full military dress in 1868.
Then, in 1867, Japan’s Tokugawa Shogunate requested help from Napoleon III’s Second French Empire in modernizing their armies. Brunet was sent as the artillery expert alongside a team of other French military advisors.
The group was to train the shogunate’s new troops on how to use modern weapons and tactics. Unfortunately for them, a civil war would break out just a year later between the shogunate and the imperial government.
On January 27, 1868, Brunet and Captain André Cazeneuve — another French military advisor in Japan — accompanied the shogun and his troops on a march to Japan’s capital city of Kyoto.
The shogun’s army was to deliver a stern letter to the Emperor to reverse his decision to strip the Tokugawa shogunate, or the longstanding elite, of their titles and lands.
However, the army was not allowed to pass and troops of the Satsuma and Choshu feudal lords — who were the influence behind the Emperor’s decree — were ordered to fire.
Thus began the first conflict of the Boshin War known as The Battle of Toba-Fushimi. Although the shogun’s forces had 15,000 men to the Satsuma-Choshu’s 5,000, they had one critical flaw: equipment.
While most of the imperial forces were armed with modern weapons such as rifles, howitzers, and Gatling guns, many of the shogunate’s soldiers were still armed with outdated weapons such as swords and pikes, as was the samurai custom.
The battle lasted for four days, but was a decisive victory for the imperial troops, leading many Japanese feudal lords to switch sides from the shogun to the emperor. Brunet and the Shogunate’s Admiral Enomoto Takeaki fled north to the capital city of Edo (modern-day Tokyo) on the warship Fujisan.
Living With The Samurai
Around this time, foreign nations — including France — vowed neutrality in the conflict. Meanwhile, the restored Meiji Emperor ordered the French advisor mission to return home, since they had been training the troops of his enemy — the Tokugawa Shogunate.
Wikimedia Commons The full samurai battle regalia a Japanese warrior would wear to war. 1860.
While most of his peers agreed, Brunet refused. He chose to stay and fight alongside the Tokugawa. The only glimpse into Brunet’s decision comes from a letter he wrote directly to French Emperor Napoleon III. Aware that his actions would be seen as either insane or treasonous, he explained that:
“A revolution is forcing the Military Mission to return to France. Alone I stay, alone I wish to continue, under new conditions: the results obtained by the Mission, together with the Party of the North, which is the party favorable to France in Japan. Soon a reaction will take place, and the Daimyos of the North have offered me to be its soul. I have accepted, because with the help of one thousand Japanese officers and non-commissioned officers, our students, I can direct the 50,000 men of the confederation.”
The Fall Of The Samurai
In Edo, the imperial forces were victorious again, largely due to Tokugawa Shogun Yoshinobu’s decision to submit to the Emperor. He surrendered the city, and only small bands of shogunate forces continued to fight back.
Wikimedia Commons The port of Hakodate in ca. 1930. The Battle of Hakodate saw 7,000 Imperial troops fight 3,000 shogun warriors in 1869.
Despite this, the commander of the shogunate’s navy, Enomoto Takeaki, refused to surrender and headed north in hopes of rallying the Aizu clan’s samurai.
They became the core of the so-called Northern Coalition of feudal lords who joined the remaining Tokugawa leaders in their refusal to submit to the Emperor.
The Coalition continued to fight bravely against imperial forces in Northern Japan. Unfortunately, they simply didn’t have enough modern weaponry to stand a chance against the Emperor’s modernized troops. They were defeated by November 1868.
Around this time, Brunet and Enomoto fled north to the island of Hokkaido. Here, the remaining Tokugawa leaders established the Ezo Republic that continued their struggle against the Japanese imperial state.
By this point, it seemed as though Brunet had chosen the losing side, but surrender was not an option.
The last major battle of the Boshin War happened at the Hokkaido port city of Hakodate. In this battle that spanned half a year from December 1868 to June 1869, 7,000 Imperial troops battled against 3,000 Tokugawa rebels.
Wikimedia Commons French military advisors and their Japanese allies in Hokkaido. Back: Cazeneuve, Marlin, Fukushima Tokinosuke, Fortant. Front: Hosoya Yasutaro, Jules Brunet, Matsudaira Taro (vice-president of the Ezo Republic), and Tajima Kintaro.
Jules Brunet and his men did their best, but the odds were not in their favor, largely due to the technological superiority of the imperial forces.
Jules Brunet Escapes Japan
As a high-profile combatant of the losing side, Brunet was now a wanted man in Japan.
Fortunately, the French warship Coëtlogon evacuated him from Hokkaido just in time. He was then ferried to Saigon — at the time controlled by the French — and returned to France.
Although the Japanese government demanded Brunet receive punishment for his support of the shogunate in the war, the French government did not budge because his story won the public’s support.
Instead, he was reinstated to the French Army after six months and participated in the Franco-Prussian War of 1870-1871, during which he was taken prisoner during the Siege of Metz.
Later on, he continued to play a major role in the French military, participating in the suppression of the Paris Commune in 1871.
Wikimedia Commons: Jules Brunet had a long, successful military career after his time in Japan. He’s seen here (hat in hand) as Chief of Staff. Oct. 1, 1898.
Meanwhile, his former friend Enomoto Takeaki was pardoned and rose to the rank of vice-admiral in the Imperial Japanese Navy, using his influence to get the Japanese government to not only forgive Brunet but award him a number of medals, including the prestigious Order of the Rising Sun.
Over the next 17 years, Jules Brunet was promoted several times. From officer to general to Chief of Staff, he had a thoroughly successful military career until his death in 1911. But he would be most remembered for his daring, adventurous actions in Japan, which served as one of the main inspirations for the 2003 film The Last Samurai.
In this film, Tom Cruise plays American Army officer Nathan Algren, who arrives in Japan to help train Meiji government troops in modern weaponry but becomes embroiled in a war between the samurai and the Emperor’s modern forces.
There are many parallels between the story of Algren and Brunet.
Both were Western military officers who trained Japanese troops in the use of modern weapons and ended up supporting a rebellious group of samurai who still used mainly traditional weapons and tactics. Both also ended up being on the losing side.
But there are many differences as well. Unlike Brunet, Algren trains the imperial government’s troops and joins the samurai only after he becomes their hostage.
Further, in the film, the samurai are sorely overmatched by the Imperials in terms of equipment. In the true story of The Last Samurai, however, the samurai rebels actually had some Western garb and weaponry, thanks to Westerners like Brunet who had been paid to train them.
Meanwhile, the film’s storyline is set in a slightly later period, 1877, after the emperor had been restored following the fall of the shogunate. This period was called the Meiji Restoration, and 1877 was the year of the last major samurai rebellion against Japan’s imperial government.
Wikimedia Commons: The final battle depicted in the film, which shows Katsumoto/Takamori’s death, did actually happen. But it happened years after Brunet left Japan.
This rebellion was organized by the samurai leader Saigo Takamori, who served as the inspiration for The Last Samurai‘s Katsumoto, played by Ken Watanabe. In the true story of The Last Samurai, the great and final samurai rebellion led by Takamori ended at the Battle of Shiroyama. In the film, Watanabe’s character Katsumoto falls, and in reality, so did Takamori.
This battle, however, came in 1877, years after Brunet had already left Japan.
More importantly, the film paints the samurai rebels as the righteous and honorable keepers of an ancient tradition, while the Emperor’s supporters are shown as evil capitalists who only care about money.
In reality, Japan’s struggle between modernity and tradition was far less black and white, with injustices and mistakes on both sides.
The Real Motivations Of The Samurai
According to history professor Cathy Schultz, “Many samurai fought Meiji modernization not for altruistic reasons but because it challenged their status as the privileged warrior caste…The film also misses the historical reality that many Meiji policy advisors were former samurai, who had voluntarily given up their traditional privileges to follow a course they believed would strengthen Japan.”
You can read more here: https://allthatsinteresting.com/last-samurai-true-story-jules-brunet
In 1961, it officially became illegal to give someone a tattoo in New York City. But Thom deVita didn’t let this new restriction deter him from inking people. The ban remained in place until 1997.
What is the earliest evidence of tattoos?
In terms of tattoos on actual bodies, the earliest known examples were for a long time Egyptian and were present on several female mummies dated to c. 2000 B.C. But following the more recent discovery of the Iceman from the area of the Italian-Austrian border in 1991 and his tattoo patterns, this date has been pushed back a further thousand years when he was carbon-dated at around 5,200 years old.
Can you describe the tattoos on the Iceman and their significance?
Following discussions with my colleague Professor Don Brothwell of the University of York, one of the specialists who examined him, the distribution of the tattooed dots and small crosses on his lower spine and right knee and ankle joints corresponds to areas of strain-induced degeneration, with the suggestion that they may have been applied to alleviate joint pain and were therefore essentially therapeutic. This would also explain their somewhat ‘random’ distribution in areas of the body which would not have been that easy to display had they been applied as a form of status marker.
What is the evidence that ancient Egyptians had tattoos?
There’s certainly evidence that women had tattoos on their bodies and limbs from figurines c. 4000-3500 B.C. to occasional female figures represented in tomb scenes c. 1200 B.C. and in figurine form c. 1300 B.C., all with tattoos on their thighs. Also small bronze implements identified as tattooing tools were discovered at the town site of Gurob in northern Egypt and dated to c. 1450 B.C. And then, of course, there are the mummies with tattoos, from the three women already mentioned and dated to c. 2000 B.C. to several later examples of female mummies with these forms of permanent marks found in Greco-Roman burials at Akhmim.
What function did these tattoos serve? Who got them and why?
Because this seemed to be an exclusively female practice in ancient Egypt, mummies found with tattoos were usually dismissed by the (male) excavators who seemed to assume the women were of “dubious status,” described in some cases as “dancing girls.” The female mummies had nevertheless been buried at Deir el-Bahari (opposite modern Luxor) in an area associated with royal and elite burials, and we know that at least one of the women described as “probably a royal concubine” was actually a high-status priestess named Amunet, as revealed by her funerary inscriptions.
And although it has long been assumed that such tattoos were the mark of prostitutes or were meant to protect the women against sexually transmitted diseases, I personally believe that the tattooing of ancient Egyptian women had a therapeutic role and functioned as a permanent form of amulet during the very difficult time of pregnancy and birth. This is supported by the pattern of distribution, largely around the abdomen, on top of the thighs and the breasts, and would also explain the specific types of designs, in particular the net-like distribution of dots applied over the abdomen. During pregnancy, this specific pattern would expand in a protective fashion in the same way bead nets were placed over wrapped mummies to protect them and “keep everything in.” The placing of small figures of the household deity Bes at the tops of their thighs would again suggest the use of tattoos as a means of safeguarding the actual birth, since Bes was the protector of women in labor, and his position at the tops of the thighs a suitable location. This would ultimately explain tattoos as a purely female custom.
Who made the tattoos?
Although we have no explicit written evidence in the case of ancient Egypt, it may well be that the older women of a community would create the tattoos for the younger women, as happened in 19th-century Egypt and happens in some parts of the world today.
What instruments did they use?
It is possible that an implement best described as a sharp point set in a wooden handle, dated to c. 3000 B.C. and discovered by archaeologist W.M.F. Petrie at the site of Abydos may have been used to create tattoos. Petrie also found the aforementioned set of small bronze instruments c. 1450 B.C.—resembling wide, flattened needles—at the ancient town site of Gurob. If tied together in a bunch, they would provide repeated patterns of multiple dots.
These instruments are also remarkably similar to much later tattooing implements used in 19th-century Egypt. The English writer William Lane (1801-1876) observed, “the operation is performed with several needles (generally seven) tied together: with these the skin is pricked in a desired pattern: some smoke black (of wood or oil), mixed with milk from the breast of a woman, is then rubbed in…. It is generally performed at the age of about 5 or 6 years, and by gipsy-women.”
What did these tattoos look like?
Most examples on mummies are largely dotted patterns of lines and diamond patterns, while figurines sometimes feature more naturalistic images. The tattoos occasionally found in tomb scenes and on small female figurines which form part of cosmetic items also have small figures of the dwarf god Bes on the thigh area.
What were they made of? How many colors were used?
Usually a dark or black pigment such as soot was introduced into the pricked skin. It seems that brighter colors were largely used in other ancient cultures, such as the Inuit who are believed to have used a yellow color along with the more usual darker pigments.
What has surprised you the most about ancient Egyptian tattooing?
That it appears to have been restricted to women during the purely dynastic period, i.e. pre-332 B.C. Also the way in which some of the designs can be seen to be very well placed, once it is accepted they were used as a means of safeguarding women during pregnancy and birth.
Can you describe the tattoos used in other ancient cultures and how they differ?
Among the numerous ancient cultures who appear to have used tattooing as a permanent form of body adornment, the Nubians to the south of Egypt are known to have used tattoos. The mummified remains of women of the indigenous C-group culture found in cemeteries near Kubban c. 2000-1500 B.C. were found to have blue tattoos, which in at least one case featured the same arrangement of dots across the abdomen noted on the aforementioned female mummies from Deir el-Bahari. The ancient Egyptians also represented the male leaders of their Libyan neighbors c. 1300-1100 B.C. with clear, rather geometrical tattoo marks on their arms and legs and portrayed them in Egyptian tomb, temple and palace scenes.
The Scythian Pazyryk of the Altai Mountain region were another ancient culture which employed tattoos. In 1948, the 2,400 year old body of a Scythian male was discovered preserved in ice in Siberia, his limbs and torso covered in ornate tattoos of mythical animals. Then, in 1993, a woman with tattoos, again of mythical creatures on her shoulders, wrists and thumb and of similar date, was found in a tomb in Altai. The practice is also confirmed by the Greek writer Herodotus c. 450 B.C., who stated that amongst the Scythians and Thracians “tattoos were a mark of nobility, and not to have them was testimony of low birth.”
Accounts of the ancient Britons likewise suggest they too were tattooed as a mark of high status, and with “divers shapes of beasts” tattooed on their bodies, the Romans named one northern tribe “Picti,” literally “the painted people.”
Yet amongst the Greeks and Romans, the use of tattoos, or “stigmata” as they were then called, seems to have been largely as a means to mark someone as “belonging” either to a religious sect or to an owner in the case of slaves, or even as a punitive measure to mark them as criminals. It is therefore quite intriguing that during Ptolemaic times, when a dynasty of Macedonian Greek monarchs ruled Egypt, the pharaoh himself, Ptolemy IV (221-205 B.C.), was said to have been tattooed with ivy leaves to symbolize his devotion to Dionysus, Greek god of wine and the patron deity of the royal house at that time. The fashion was also adopted by Roman soldiers and spread across the Roman Empire until the emergence of Christianity, when tattoos were felt to “disfigure that made in God’s image” and so were banned by the Emperor Constantine (A.D. 306-337).
We have also examined tattoos on mummified remains of some of the ancient pre-Columbian cultures of Peru and Chile, which often replicate the same highly ornate images of stylized animals and a wide variety of symbols found in their textile and pottery designs. One stunning female figurine of the Nazca culture has what appears to be a huge tattoo right around her lower torso, stretching across her abdomen and extending down to her genitalia and, presumably, once again alluding to the regions associated with birth. Then on the mummified remains which have survived, tattoos were noted on torsos, limbs, hands, fingers and thumbs, and sometimes facial tattooing was practiced.
With extensive facial and body tattooing used among Native Americans, such as the Cree, the mummified bodies of a group of six Greenland Inuit women c. A.D. 1475 also revealed evidence for facial tattooing. Infrared examination revealed that five of the women had been tattooed in a line extending over the eyebrows, along the cheeks and in some cases with a series of lines on the chin. Another tattooed female mummy, dated 1,000 years earlier, was also found on St. Lawrence Island in the Bering Sea, her tattoos of dots, lines and hearts confined to the arms and hands.
Evidence for tattooing is also found amongst some of the ancient mummies found in China’s Taklamakan Desert c. 1200 B.C., although during the later Han Dynasty (202 B.C.-A.D. 220), it seems that only criminals were tattooed.
Japanese men began adorning their bodies with elaborate tattoos in the late A.D. 3rd century.
The elaborate tattoos of the Polynesian cultures are thought to have developed over millennia, featuring highly elaborate geometric designs, which in many cases can cover the whole body. Following James Cook’s British expedition to Tahiti in 1769, the islanders’ term “tatatau” or “tattau,” meaning to hit or strike, gave the west our modern term “tattoo.” The marks then became fashionable among Europeans, particularly men such as sailors and coal-miners; both professions carried serious risks, which presumably explains the almost amulet-like use of anchor or miner’s-lamp tattoos on the men’s forearms.
What about modern tattoos outside of the western world?
Modern Japanese tattoos are real works of art, with many modern practitioners, while the highly skilled tattooists of Samoa continue to create their art as it was carried out in ancient times, prior to the invention of modern tattooing equipment. Various cultures throughout Africa also employ tattoos, including the fine dots on the faces of Berber women in Algeria, the elaborate facial tattoos of Wodabe men in Niger and the small crosses on the inner forearms which mark Egypt’s Christian Copts.
What do Maori facial designs represent?
In the Maori culture of New Zealand, the head was considered the most important part of the body, with the face embellished by incredibly elaborate tattoos or ‘moko,’ which were regarded as marks of high status. Each tattoo design was unique to that individual and since it conveyed specific information about their status, rank, ancestry and abilities, it has accurately been described as a form of id card or passport, a kind of aesthetic bar code for the face. After sharp bone chisels were used to cut the designs into the skin, a soot-based pigment would be tapped into the open wounds, which then healed over to seal in the design. With the tattoos of warriors given at various stages in their lives as a kind of rite of passage, the decorations were regarded as enhancing their features and making them more attractive to the opposite sex.
Although Maori women were also tattooed on their faces, the markings tended to be concentrated around the nose and lips. Although Christian missionaries tried to stop the procedure, the women maintained that tattoos around their mouths and chins prevented the skin becoming wrinkled and kept them young; the practice was apparently continued as recently as the 1970s.
Why do you think so many cultures have marked the human body and did their practices influence one another?
In many cases, it seems to have sprung up independently as a permanent way to place protective or therapeutic symbols upon the body, then as a means of marking people out into appropriate social, political or religious groups, or simply as a form of self-expression or fashion statement.
Yet, as in so many other areas of adornment, there were of course cross-cultural influences, such as those which existed between the Egyptians and Nubians, the Thracians and Greeks and the many cultures encountered by Roman soldiers during the expansion of the Roman Empire in the final centuries B.C. and the first centuries A.D. And, certainly, Polynesian culture is thought to have influenced Maori tattoos.
As reports and images from European explorers’ travels in Polynesia reached Europe, the modern fascination with tattoos began to take hold. Although the ancient peoples of Europe had practiced some forms of tattooing, it had disappeared long before the mid-1700s. Explorers returned home with tattooed Polynesians to exhibit at world fairs, in lecture halls and in dime museums, to demonstrate the height of European civilization compared to the “primitive natives” of Polynesia. But the sailors on their ships also returned home with their own tattoos.
Native practitioners found an eager clientele among sailors and other visitors to Polynesia. Colonial ideology dictated that the tattoos of the Polynesians were a mark of their primitiveness. The mortification of their skin and the ritual of spilling blood ran contrary to the values and beliefs of European missionaries, who largely condemned tattoos. Although many forms of traditional Polynesian tattoo declined sharply after the arrival of Europeans, the art form, unbound from tradition, flourished on the fringes of European society.
In the United States, technological advances in machinery, design and color led to a unique, all-American, mass-produced form of tattoo. Martin Hildebrandt set up a permanent tattoo shop in New York City in 1846 and began a tradition by tattooing sailors and military servicemen from both sides of the Civil War. In England, youthful King Edward VII started a tattoo fad among the aristocracy when he was tattooed before ascending to the throne. Both these trends mirror the cultural beliefs that inspired Polynesian tattoos: to show loyalty and devotion, to commemorate a great feat in battle, or simply to beautify the body with a distinctive work of art.
The World War II era of the 1940s was considered the Golden Age of tattoo due to the patriotic mood and the preponderance of men in uniform. But would-be sailors with tattoos of naked women weren’t allowed into the navy, and during the war tattoo artists clothed many of those figures with nurses’ dresses, Native-American costumes or the like. By the 1950s, tattooing had an established place in Western culture, but was generally viewed with disdain by the higher reaches of society. Back alley and boardwalk tattoo parlors continued to do brisk business with sailors and soldiers. But they often refused to tattoo women unless they were twenty-one, married and accompanied by their spouse, to spare tattoo artists the wrath of a father, boyfriend or unwitting husband.
Today, tattooing is recognized as a legitimate art form that attracts people of all walks of life and both sexes. Each individual has his or her own reasons for getting a tattoo: to mark themselves as a member of a group, to honor loved ones, to express an image of themselves to others. With the greater acceptance of tattoos in the West, many tattoo artists in Polynesia are incorporating ancient symbols and patterns into modern designs. Others are using the technical advances in tattooing to make traditional tattooing safer and more accessible to Polynesians who want to identify themselves with their culture’s past.
Humans have marked their bodies with tattoos for thousands of years. These permanent designs—sometimes plain, sometimes elaborate, always personal—have served as amulets, status symbols, declarations of love, signs of religious beliefs, adornments and even forms of punishment. Joann Fletcher, research fellow in the department of archaeology at the University of York in Britain, describes the history of tattoos and their cultural significance to people around the world, from the famous “Iceman,” a 5,200-year-old frozen mummy, to today’s Maori.
The hamburger is one of the world’s most popular foods, with nearly 50 billion served up annually in the United States alone. Although the humble beef-patty-on-a-bun is technically not much more than 100 years old, it’s part of a far greater lineage, linking American businessmen, World War II soldiers, German political refugees, medieval traders and Neolithic farmers.
The groundwork for the ground-beef sandwich was laid with the domestication of cattle (in Mesopotamia around 10,000 years ago), and with the growth of Hamburg, Germany, as an independent trading city in the 12th century, where beef delicacies were popular.
12th – 13th Centuries – Genghis Khan (1162-1227), crowned the “emperor of all emperors,” and his army of fierce Mongol horsemen, known as the “Golden Horde,” conquered two thirds of the then known world. The Mongols were a fast-moving, cavalry-based army that rode small sturdy ponies. They stayed in their saddles for long periods of time, sometimes days, without ever dismounting. They had little opportunity to stop and build a fire for their meal.
The entire village would follow behind the army on great wheeled carts called “yurts,” leading huge herds of sheep, goats, oxen, and horses. As the army needed food that could be carried on their mounts and eaten easily with one hand while they rode, ground meat was the perfect choice. They would use scrapings of lamb or mutton, which were formed into flat patties. They softened the meat by placing the patties under the saddles of their horses while riding into battle. When it was time to eat, the meat would be eaten raw, having been tenderized by the saddle and the back of the horse.
1238 – When Genghis Khan’s grandson, Khubilai Khan (1215-1294), invaded Moscow, the Mongols naturally brought their unique ground meat with them. The Russians adopted it into their own cuisine under the name “Steak Tartare” (Tartars being their name for the Mongols). Over many years, Russian chefs adapted and developed this dish, refining it with chopped onions and raw eggs.
15th Century
Beginning in the fifteenth century, minced beef was a valued delicacy throughout Europe. Hashed beef was made into sausage in several different regions of Europe.
1600s – Ships from the German port of Hamburg began calling on Russian ports. During this period the Russian steak tartare was brought back to Germany and called “tartare steak.”
18th and 19th Centuries
Jump ahead to 1848, when political revolutions shook the 39 states of the German Confederation, spurring an increase in German immigration to the United States. With German people came German food: beer gardens flourished in American cities, while butchers offered a panoply of traditional meat preparations. Because Hamburg was known as an exporter of high-quality beef, restaurants began offering a “Hamburg-style” chopped steak.
Hamburg Steak:
In the late eighteenth century, the largest ports in Europe were in Germany. Sailors who had visited the ports of Hamburg and New York brought this food and the term “Hamburg Steak” into popular usage. To attract German sailors, eating stands along the New York City harbor offered “steak cooked in the Hamburg style.”
Immigrants to the United States from German-speaking countries brought with them some of their favorite foods. One of them was Hamburg Steak. The Germans simply flavored shredded low-grade beef with regional spices, and both cooked and raw it became a standard meal among the poorer classes. In the seaport town of Hamburg, it acquired the name Hamburg steak. Today, this hamburger patty is no longer called Hamburg Steak in Germany but rather “Frikadelle,” “Frikandelle” or “Bulette,” originally Italian and French words.
According to Theodora Fitzgibbon in her book The Food of the Western World – An Encyclopedia of Food from North America and Europe:
[The Hamburg steak] originated on the German Hamburg-Amerika line boats, which brought emigrants to America in the 1850s. There was at that time a famous Hamburg beef which was salted and sometimes slightly smoked, and therefore ideal for keeping on a long sea voyage. As it was hard, it was minced and sometimes stretched with soaked breadcrumbs and chopped onion. It was popular with the Jewish emigrants, who continued to make Hamburg steaks, as the patties were then called, with fresh meat when they settled in the U.S.
The cookbooks:
1758 – By the mid-18th century, German immigrants had also begun arriving in England. One recipe, titled “Hamburgh Sausage,” appeared in Hannah Glasse’s 1758 English cookbook The Art of Cookery Made Plain and Easy. It consisted of chopped beef, suet, and spices. The author recommended that this sausage be served with toasted bread. Hannah Glasse’s cookbook was also very popular in Colonial America, although it was not published in the United States until 1805. This American edition also contained the “Hamburgh Sausage” recipe with slight revisions.
1884 – The original Boston Cooking School Cook Book, by Mrs. D.A. Lincoln (Mary Bailey), had a recipe for Broiled Meat Cakes and also for Hamburgh Steak:
Broiled Meat Cakes – Chop lean, raw beef quite fine. Season with salt, pepper, and a little chopped onion, or onion juice. Make it into small flat cakes, and broil on a well-greased gridiron or on a hot frying pan. Serve very hot with butter or Maitre d’Hotel sauce.
Hamburgh Steak – Pound a slice of round steak enough to break the fibre. Fry two or three onions, minced fine, in butter until slightly browned. Spread the onions over the meat, fold the ends of the meat together, and pound again, to keep the onions in the middle. Broil two or three minutes. Spread with butter, salt, and pepper.
1894 – In the 1894 edition of the book The Epicurean: A Complete Treatise of Analytical & Practical Studies, by Charles Ranhofer (1836-1899), chef at the famous Delmonico’s restaurant in New York, there is a listing for Beef Steak Hamburg Style. The dish is also listed in French as Bifteck Hambourgeoise. What made his version unique was that the recipe called for the ground beef to be mixed with kidney and bone marrow:
“One pound of tenderloin beef free of sinews and fat; chop it up on a chopping block with four ounces of beef kidney suet, free of nerves and skin, or else the same quantity of marrow; add one ounce of chopped onions fried in butter without attaining color; season all with salt, pepper and nutmeg, and divide the preparation into balls, each one weighing four ounces; flatten them down, roll them in bread-crumbs and fry them in a sauté pan in butter. When of a fine color on both sides, dish them up pouring a good thickened gravy . . . over.”
1906 – The American novelist Upton Sinclair (1878-1968) published The Jungle, which told of the horrors of Chicago’s meat packing plants. Sinclair was surprised that the public missed the main point of his impressionistic fiction and took it to be an indictment of the unhygienic conditions of the meat packing industry. The book caused much distrust of chopped meat in the United States, and people avoided it for several years.
Invention of Meat Choppers:
Referring to ground beef as hamburger dates to the invention of mechanical meat choppers during the 1800s. It was not until the early nineteenth century that wood, tin, and pewter cylinders with wooden plunger pushers became common. Steve Church of Ridgecrest, California uncovered some long-forgotten U.S. patents on meat cutters.
In mid-19th-century America, preparations of raw beef that had been chopped, chipped, ground or scraped were a common prescription for digestive issues. After a New York doctor, James H. Salisbury, suggested in 1867 that cooked beef patties might be just as healthy, cooks and physicians alike quickly adopted the “Salisbury Steak.” Around the same time, the first popular meat grinders for home use became widely available (Salisbury endorsed one called the American Chopper), setting the stage for an explosion of readily available ground beef.
The hamburger seems to have made its jump from plate to bun in the last decades of the 19th century, though the site of this transformation is highly contested. Lunch wagons, fair stands and roadside restaurants in Wisconsin, Connecticut, Ohio, New York and Texas have all been put forward as possible sites of the hamburger’s birth. Whatever its genesis, the burger-on-a-bun found its first wide audience at the 1904 St. Louis World’s Fair, which also introduced millions of Americans to new foods ranging from waffle ice cream cones and cotton candy to peanut butter and iced tea.
Two years later, though, disaster struck in the form of Upton Sinclair’s journalistic novel The Jungle, which detailed the unsavory side of the American meatpacking industry. Industrial ground beef was easy to adulterate with fillers, preservatives and meat scraps, and the hamburger became a prime suspect.
The history of the American burger:
The hamburger might have remained on the seamier margins of American cuisine were it not for the vision of Edgar “Billy” Ingram and Walter Anderson, who opened their first White Castle restaurant in Kansas in 1921. Sheathed inside and out in gleaming porcelain and stainless steel, White Castle restaurants countered hamburger meat’s low reputation by becoming bastions of cleanliness, health and hygiene (Ingram even commissioned a medical school study to show the health benefits of hamburgers). Their system, which included on-premises meat grinding, worked well and was the inspiration for other national hamburger chains founded in the boom years after World War II: McDonald’s and In-N-Out Burger (both founded in 1948), Burger King (1954) and Wendy’s (1969).
Only one of the claimants below served their hamburgers on a bun – Oscar Weber Bilby in 1891. The rest served them as sandwiches between two slices of bread.
Most of the following stories on the history of the hamburger were told after the fact and are based on the recollections of family members. For many people, which story or legend you believe probably depends on where you are from. You be the judge! The claims are as follows:
1885 – Charlie Nagreen of Seymour, Wisconsin – At the age of 15, he went to the Outagamie County Fair and set up an ox-drawn food stand selling meatballs. Business wasn’t good, and he quickly realized that it was because meatballs were too difficult to eat while strolling around the fair. In a flash of innovation, he flattened the meatballs, placed them between two slices of bread and called his new creation a hamburger. He was known to many as “Hamburger Charlie.” He returned to sell hamburgers at the fair every year until his death in 1951, and he would entertain people with guitar and mouth organ and his jingle:
Hamburgers, hamburgers, hamburgers hot; onions in the middle, pickle on top. Makes your lips go flippity flop.
The town of Seymour, Wisconsin, is so certain of this claim that it built a Hamburger Hall of Fame as a tribute to Charlie Nagreen and the legacy he left behind. The town claims to be “Home of the Hamburger” and holds an annual Burger Festival on the first Saturday of August each year. Events include a ketchup slide, bun toss, and hamburger-eating contest, as well as the “world’s largest hamburger parade.”
On May 9, 2007, members of the Wisconsin legislature declared Seymour, Wisconsin, as the home of the hamburger:
Whereas, Seymour, Wisconsin, is the rightful home of the hamburger; and,
Whereas, other accounts of the origination of the hamburger trace back only so far as the 1880s, while Seymour’s claim can be traced to 1885; and,
Whereas, Charles Nagreen, also known as Hamburger Charlie, of Seymour, Wisconsin, began calling ground beef patties in a bun “hamburgers” in 1885; and,
Whereas, Hamburger Charlie first sold his world-famous hamburgers at age 15 at the first Seymour Fair in 1885, and later at the Brown and Outagamie county fairs; and,
Whereas, Hamburger Charlie employed as many as eight people at his famous hamburger tent, selling 150 pounds of hamburgers on some days; and,
Whereas, the hamburger has since become an American classic, enjoyed by families and backyard grills alike; now, therefore, be it
Resolved by the assembly, the senate concurring, That the members of the Wisconsin legislature declare Seymour, Wisconsin, the Original Home of the Hamburger.
1885 – The family of Frank and Charles Menches of Akron, Ohio, claims the brothers invented the hamburger while traveling with a 100-man concession circuit that worked events (fairs, race meetings, and farmers’ picnics) in the Midwest in the early 1880s. During a stop at the Erie County Fair in Hamburg, New York, the brothers ran out of pork for their hot sausage patty sandwiches. Because it was a particularly hot day, the local butchers had stopped slaughtering pigs, and their butcher suggested that they substitute beef for the pork. The brothers ground up the beef, mixed it with some brown sugar, coffee, and other spices, and served it as a sandwich between two pieces of bread. They called this sandwich the “hamburger” after Hamburg, New York, where the fair was being held. According to family legend, Frank didn’t really know what to call it, so he looked up, saw the banner for the Hamburg fair, and said, “This is the hamburger.” Frank’s 1951 obituary in The Los Angeles Times acknowledged him as the “inventor” of the hamburger.
Hamburg held its first Burgerfest in 1985 to mark the 100th anniversary of the birth of the hamburger after organizers discovered a history book detailing the burger’s origins.
In 1991, descendants of the Menches brothers stumbled across the original recipe among some old papers their great-grandmother had left behind. After selling their burgers at county fairs for a few years, the family opened the Menches Bros. Restaurant in Akron, Ohio. The Menches family is still in the restaurant business and still serving hamburgers in Ohio.
On May 28, 2005, the city of Akron, Ohio, hosted the First Annual National Hamburger Festival to celebrate the 120th anniversary of the invention of the hamburger. The festival was dedicated to Frank and Charles Menches, a measure of how confident Akron is in the Menches family’s contested claim that two of its residents invented the hamburger. The Ohio legislature has also considered making the hamburger the state food.
1891 – The family of Oscar Weber Bilby claims the first-known hamburger on a bun was served on Grandpa Oscar’s farm just west of Tulsa, Oklahoma, in 1891. The family says that Grandpa Oscar was the first to add the bun, but they concede that hamburger sandwiches made with bread may predate Grandpa Oscar’s famous hamburger.
Michael Wallis, travel writer and reporter for Oklahoma Today magazine, did an extensive search in 1995 for the true origins of the hamburger and determined that Oscar Weber Bilby himself was the creator of the hamburger as we know it. According to Wallis’s 1995 article, Welcome To Hamburger Heaven, in an interview with Harold Bilby:
The story has been passed down through the generations like a family Bible. “Grandpa himself told me that it was in June of 1891 when he took up a chunk of iron and made himself a big ol’ grill,” explains Harold. “Then the next month on the Fourth of July he built a hickory wood fire underneath that grill, and when those coals were glowing hot, he took some ground Angus meat and fired up a big batch of hamburgers. When they were cooked all good and juicy, he put them on my Grandma Fanny’s homemade yeast buns – the best buns in all the world, made from her own secret recipe. He served those burgers on buns to neighbors and friends under a grove of pecan trees . . . They couldn’t get enough, so Grandpa hosted another big feed. He did that every Fourth of July, and sometimes as many as 125 people showed up.”
Simple math supports Harold Bilby’s contention that if his Grandpa served burgers on Grandma Fanny’s buns in 1891, then the Bilbys eclipsed the St. Louis World’s Fair vendors by at least thirteen years. That would make Oklahoma the cradle of the hamburger. “There’s not even the trace of a doubt in my mind,” says Harold. “My grandpa invented the hamburger on a bun right here in what became Oklahoma, and if anybody wants to say different, then let them prove otherwise.”
In 1933, Oscar and his son, Leo, opened the family’s first hamburger stand in Tulsa, Oklahoma, called Weber’s Superior Root Beer Stand. They still use the same grill used in 1891, with one minor variation: the wood stove has been converted to natural gas. In a letter to Linda Stradley of What’s Cooking America, dated July 31, 2004, Rick Bilby states the following:
My great-grandfather, Oscar Weber Bilby, invented the hamburger on July 4, 1891. He served ground beef patties that were seared to perfection over an open flame on a hand-made grill. My great-grandmother Fanny made her own home-made yeast hamburger buns to put around the ground beef patties. They served this new sandwich along with their tasty home-made root beer, which was also carbonated with yeast. People would come from all over the county on July 4th each year to consume and enjoy these treats. To this day we still cook our hamburgers on grandpa’s grill, which is now fired by natural gas.
On April 13, 1995, Governor Frank Keating of Oklahoma proclaimed that the first true hamburger on a bun was created and consumed in Tulsa in 1891. The State of Oklahoma proclamation states:
Whereas, scurrilous rumors have credited Athens, Texas, as the birthplace of the hamburger, claiming for that region south of the Red River commonly known as Baja Oklahoma a fame and renown which are hardly its due; and
Whereas, the Legislature of Baja Oklahoma has gone so far as to declare April 3, 1995, to be Athens Day at the State Capitol, largely on the strength of this bogus claim; and
Whereas, while the residents, the scenery, the hospitality and the food found in Athens are no doubt superior to those in virtually any other locale, they must be recognized, in the words of Mark Twain, as “the lightning bug is to the lightning” when compared with the Great City of Tulsa in the Great State of Oklahoma; and
Whereas, although someone in Athens, in the 1860’s, may have placed cooked ground beef between two slices of bread, this minor accomplishment can in no way be compared to the true hamburger, which comes on a bun accompanied by such delights as pickles, onions, lettuce, tomato, cheese and, in some cases, special sauce; and
Whereas, the first true hamburger on a bun, as meticulous research shows, was created and consumed in Tulsa in 1891 and was only copied for resale at the St. Louis World’s Fair a full 13 years after that momentous and history-making occasion:
Now Therefore, I, Frank Keating, Governor of the State of Oklahoma, do hereby proclaim April 12, 1995, as THE REAL BIRTHPLACE OF THE HAMBURGER IN TULSA DAY.
1900 – Louis Lassen of New Haven, Connecticut, is also recorded as serving the first “burger” at his New Haven luncheonette, Louis’ Lunch Wagon. Louis ran a small lunch wagon selling steak sandwiches to local factory workers. A frugal businessman, he did not like to waste the excess beef from his daily lunch rush. It is said that he ground up some scraps of beef and served them as a sandwich between two pieces of toasted bread to a customer who was in a hurry and wanted to eat on the run.
Kenneth Lassen, Louis’ grandson, was quoted in the September 25, 1991 Athens Daily Review as saying:
“We have signed, dated and notarized affidavits saying we served the first hamburger sandwiches in 1900. Other people may have been serving the steak but there’s a big difference between a hamburger steak and a hamburger sandwich.”
In the mid-1960s, the New Haven Preservation Trust placed a plaque on the building where Louis’ Lunch is located proclaiming Louis’ Lunch to be the first place the hamburger was sold.
Louis’ Lunch is still selling its hamburgers from a small brick building in New Haven. The sandwich is grilled vertically in antique gas grills and served between pieces of toast rather than on a bun, and the restaurant refuses to provide mustard or ketchup.
The Library of Congress named Louis’ Lunch a “Connecticut Legacy.” The following is taken from the Congressional Record, 27 July 2000, page E1377:
Honoring Louis’ Lunch on Its 105th Anniversary – Representative Rosa L. DeLauro:
. . . it is with great pleasure that I rise today to celebrate the 105th anniversary of a true New Haven landmark: Louis’ Lunch. Recently the Lassen family celebrated this landmark as well as the 100th anniversary of their claim to fame — the invention and commercial serving of one of America’s favorites, the hamburger . . . The Lassens and the community of New Haven shared unparalleled excitement when the Library of Congress named Louis’ Lunch a “Connecticut Legacy” — nothing could be more true.
1901 or 1902 – Bert W. Gray of Clarinda, Iowa, in an article by Paige Carlin for the Omaha World-Herald newspaper, takes no credit for having invented the hamburger, but he stakes an uncompromising claim to being the “daddy” of the hamburger industry. He served his hamburgers on a bun:
The hamburger business all started about 1901 or 1902 (The Grays aren’t sure which) when Mr. Gray operated a little cafe on the east side of Clarinda’s Courthouse Square.
Mr. Gray recalled: “There was an old German here named Ail Wall (or Wahl, maybe) and he ran a butcher shop. One day he was stuffing bologna with a little hand machine, and he said to me: ‘Bert, why wouldn’t ground meat make a good sandwich?’”
“I said I’d try it, so I took this ground beef and mixed it with an egg batter and fried it. I couldn’t get anybody to eat it. I quit the egg batter and just took the meat with a little flour to hold it together. The new technique paid off.”
“He almost ran the other cafes out of the sandwich business,” Mrs. Gray put in. “He could make hamburgers so nice and soft and juicy – better than I ever could,” she added.
“This old German, Wall, came over here from Hamburg, and that’s what he said to call it,” Mr. Gray explained. “I sold them for a nickel apiece in those days. That was when the meat was 10 or 12 cents a pound,” he added. “I bought $5 or $6 worth of meat at a time and I got three or four dozen pans of buns from the bakery a day.”
One time the Grays heard a conflicting claim by a man (somewhere in the northern part of the state) that he was the hamburger’s inventor. “I didn’t pay any attention to him,” Mr. Gray snorted. “I’ve got plenty of proof mine was the first,” he said.
There is so much more to read at https://whatscookingamerica.net/history/hamburgerhistory.htm.
The first new word that I learnt from this research is “sartorialist,” derived from the word sartorial (adj.): of or relating to clothing, or to style or manner of dress.
Textured, solid, striped, botanical, jacquard, geometric, 52 to 58 inches long, alternately withering or widening from 3½ to 5 inches, costing anywhere from three for $10 to $100 or more.
Why has this apparently useless piece of silk, or wool, or rayon, or polyester or even rubber (yes, there are Rubber-Necker Ties, “a recycled fashion statement for the eco-executive”) survived the swings of fashion for more than three centuries? Why is it still fit to be tied?
Fashion observers say the necktie survives because it is the one formal accessory in the male wardrobe that expresses personality, mood or inner character. The tie is that splash of color, that distinctive pattern, that statement of individuality that a man can make in the world of uniform pinstripes and plaids.
The tie has been seen as a form of male chest display, recalling the chest-pounding and puffing of our prehistoric ancestors. Or it can be viewed as the noose around the neck of the conformist white-collar worker, or the symbolic leash held by women, who purchased more than 50 percent of the 105 million ties sold in the United States last year. Although most American men do not wear ties daily, U.S. neckwear sales totaled $1.6 billion last year, with 70 percent made by American companies.
The necktie originated in the 17th century, during the Thirty Years’ War, in France. King Louis XIII hired Croatian mercenaries who wore a piece of cloth around their necks as part of their uniform. While these early neckties did serve a function (tying the top of their jackets, that is), they also had quite a decorative effect – a look that King Louis was quite fond of. In fact, he liked it so much that he made these ties a mandatory accessory for royal gatherings, and – to honor the Croatian soldiers – he gave this clothing piece the name “La Cravate”, the name for necktie in French to this day.
International Necktie Day is celebrated on October 18 in Croatia and in various cities around the world, including Dublin, Tübingen, Como, Tokyo, Sydney and other towns.
The Evolution of the Modern Necktie
The early cravats of the 17th century bear little resemblance to today’s necktie, yet the style stayed popular throughout Europe for over 200 years. The tie as we know it today did not emerge until the 1920s, but it has undergone many (often subtle) changes since then.
In the 2nd century A.D., Roman legionnaires probably didn’t think of themselves as reflecting a trend when they tied bands of cloth around their necks. Most likely, they were just looking for protection from the weather.
Some historians have called the legionnaires’ adornments the first neckwear. But others cite the excavation near the Chinese city of Xi’an of 3rd century B.C. terra-cotta statues of warriors who wore neck scarves in the belief that they were protecting the source of their strength, their Adam’s apples.
Most experts, however, date the initial appearance of the modern precursor of the tie to 1636. Croatian mercenaries, hired in Paris by King Louis XIV, wore cloth bands around their necks to ward off natural elements, which in their line of work included sword slashes.
Parisians quickly translated the Croats’ scarf into a new clothing accessory, and, voila!, the cravate was born. The French term cravate is derived from Croates, French for Croatian. Not to be outdone, the English adapted the cravat, dropping the final “e”, and the American colonies soon stepped in line.
Once launched, the cravat and its styles and knots proliferated. Early cravats looked like lace bibs with bows backing them up, some reaching two yards in length.
Among emerging varieties in the late 17th century was the Steinkirk, a corkscrew-like wrap named for the Battle of Steinkirk, where startled French officers hastily twisted their ties as they rushed from their tents to turn back the British onslaught.
During the early 18th century and into the 19th century, cravats had major competition: the stock. While a cravat generally was a long piece of cloth that wound around the neck and tied in front, the stock resembled collars worn today for whiplash or other neck injuries.
Made of muslin, sometimes with cardboard stiffeners inside, stocks were fastened in back by a hook or knot. In front, they had what looked like a pretied bowtie or sometimes a wide cravat covering the stock and swathing the neck like a poultice. Stocks forced men to stand upright in a stiff posture.
American revolutionaries George Washington, Thomas Jefferson and the Adamses (John and John Quincy) can be seen in contemporary portraits by Gilbert Stuart and Charles Willson Peale, wearing swath-like cravats, although the American versions were less radical than those of their counterparts in France.
In the mid-1700s, the “solitaire” appeared — attached to the wig in the back, wrapped around the neck and brought to a bow in the front over a cravat.
Some other bizarre dress and tie styles emerged in the mid-18th century. In England, the so-called “Macaronis” were dandies affecting an Italian style, coloring their cheeks with rouge and wearing diamond-studded pumps and cravats with huge bows. The fashion may be alluded to in the lyrics of “Yankee Doodle.”
When more than 30% of your employees are debt collectors, you have to ask: are you a lender or a collector? The following article came from Jalopnik; the link to the original is at the end of this excerpt.
When Don Foss started his career as a car salesman, he recognized early on that most of his prospective customers had shaky credit, leaving them with few options for financing to buy a vehicle. So in 1972, he started subprime auto lending company Credit Acceptance Corporation to fill that void. He knew lending money to buyers with low credit posed an inherent risk, and he knew the business couldn’t solely be focused on closing sales. It had to excel at collecting loan payments too.
Indeed, over time, the collections side of the business has transformed into a fundamental pillar of the Credit Acceptance model, sparking numerous government investigations and lawsuits over alleged deceptive practices, while exposing some of its customers to ceaseless debt.
But it’s even worse than many know. The extent of Credit Acceptance’s well-oiled debt collection machine is perhaps best illustrated in the company’s backyard: Detroit.
In 2017, one out of every eight civil lawsuits filed in Detroit’s 36th District Court, the largest district court in the state of Michigan, was a collection case brought by Credit Acceptance, according to an analysis of publicly available court records by Jalopnik. Credit Acceptance alone—a company meant to service subprime car loans under the cheerful motto of “We change lives!”—absolutely dominates the civil case volume of one of the country’s busiest courts.
The issue was raised in three reports by a legal transparency nonprofit group called PlainSite over the last year. PlainSite had the idea to scrape court records from Detroit’s 36th District court to obtain information about CAC. It also made its source code for analyzing the Detroit court records publicly available; Jalopnik independently verified and expanded on PlainSite’s methods by building our own software to scrape the records data, and conducting additional interviews.
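For anyone curious what that kind of analysis boils down to once the records are in hand, here is a minimal sketch in Python of the share-of-caseload calculation. The CSV file name, column names and the plaintiff keyword are placeholders of my own; they are not the actual format of PlainSite’s data or of the 36th District Court Case Inquiry System.

```python
# Minimal sketch of the share-of-caseload calculation described above.
# Assumes a hypothetical CSV of already-scraped case records with columns
# "filing_date" (ISO-style dates) and "plaintiff"; these names are invented
# for illustration, not taken from the court's actual record system.
import csv
from collections import defaultdict

def caseload_share(csv_path, plaintiff_keyword="CREDIT ACCEPTANCE"):
    total_by_year = defaultdict(int)
    matched_by_year = defaultdict(int)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            year = row["filing_date"][:4]      # year prefix of an ISO date
            total_by_year[year] += 1
            if plaintiff_keyword in row["plaintiff"].upper():
                matched_by_year[year] += 1
    return {
        year: (matched_by_year[year], total,
               100.0 * matched_by_year[year] / total)
        for year, total in sorted(total_by_year.items())
    }

if __name__ == "__main__":
    shares = caseload_share("36th_district_civil_cases.csv")  # hypothetical file
    for year, (hits, total, pct) in shares.items():
        print(f"{year}: {hits} of {total} civil cases ({pct:.2f}%)")
```

Figures like the 12.18 percent cited for 2017 are, in essence, the output of an aggregation of this shape run over the scraped records.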
Jalopnik’s analysis also raises the specter that Detroit’s court system, which teetered on insolvency just five years ago, is now staying financially afloat with help from the fees it collects in cases filed by Credit Acceptance’s debt collectors. The nonprofit Center for Responsible Lending, in a study earlier this year on debt collection suits clogging Oregon courts, pointed out that consumers there have to pay an appearance fee to the court before they can file a response to contest the debt, or other court fees.
“It costs to file a case in the first place, it costs to file a complaint, and then—as we found in Oregon—it costs a decent amount of money, a couple hundred dollars, to file a response,” said Lisa Stifler, deputy director of state policy at the Center for Responsible Lending. “It’s not all cases, but it’s money that, yes, if courts were funded by states more fully, then perhaps some of those fees wouldn’t need to be as high, because they have operating budgets that are covering some of those expenses. There’s some sense that court filing fees do end up paying for running the courts.”
Reached by phone, Foss deferred comment to Credit Acceptance. The lender didn’t respond to a request for an interview, and did not answer a list of detailed questions sent by Jalopnik.
In other words, the number of debt collection suits makes a bad situation in Detroit even worse. The loosely regulated auto lending market lets dealers arrange loans with exorbitantly high interest rates, financed by companies like Credit Acceptance.
High interest is associated with a higher chance of default, and if a low-income driver falls into default, loses their car to repossession, and then gets hit with a collections suit, they run the risk of Credit Acceptance garnishing up to 25 percent of whatever wages they’re earning, and possible insolvency. A study published in July found a borrower who loses their vehicle to repossession is twice as likely to file for bankruptcy.
“The real concern should be that borrowers who are unaware of the consequences of default/repossession are taken advantage of,” said Erik Mayer, a finance professor at Southern Methodist University and co-author of the study. “In some cases, lenders may know the borrower won’t be able to repay the loan in the end, but it may still be profitable to make the loan due to high interest rates, fees, and the ease with which they can repossess the car in the case of default.”
By analyzing publicly available records from Detroit’s 36th District Court dating back to 1995, Jalopnik found:
Credit Acceptance filed at least 32,799 collection suits against 39,714 Detroit car buyers, more than 4 percent of all available civil cases.
In 2017, the company’s collection suits represented 12.18 percent of all 32,660 publicly available civil cases in Detroit, up from just 1.45 percent in 2007. (The court reports that it handled more than 43,000 civil cases last year, and says it’s “not responsible” for any omissions in the online 36th District Court Case Inquiry System. At that rate, Credit Acceptance still comprised over 9 percent—or nearly one in 10—of the court’s caseload last year).
Jalopnik counted a “case” as an action against one or more defendants; in some instances, two or more defendants are named in the same suit.
The company secured judgments against 6,556 defendants that were eventually paid off in full. 6,150 of these judgments were default judgments—meaning cases when the car buyer didn’t show up to defend themselves. Defendants sometimes didn’t show up in court because they weren’t even notified to appear, several consumers told Jalopnik. In those cases, Credit Acceptance garnished at least $27.5 million in wages and income-tax refunds.
Lawsuits against at least 33,158 Detroit car buyers remain pending. In those cases, Credit Acceptance has secured 22,802 default judgments worth at least $162.6 million. It’s unclear how much has been garnished and collected from those suits to date, but records show that 40 percent have been ongoing for at least 10 years, and at least 2,200 have been pending for more than 20 years.
[Chart in the original article: open Credit Acceptance debt-collection cases, i.e., cases in which the defendant(s) have not yet satisfied the judgment(s) against them, grouped by how long each case has been open. Each bar shows how many defendants fall into that bracket and how much money has been secured in judgments against them, with each individual defendant plotted as a dot.]
It’s unclear exactly what led to the situation in Detroit, although the tough economic situation for the city and its residents in recent years has certainly contributed. The company has been investigated by regulators for potential wrongdoing, and it has faced accusations in cases across the U.S. of duping car buyers into taking on untenable loans; however, no current probes against Credit Acceptance appear to exist in Michigan.
But what’s clear is that, in recent years, Credit Acceptance has sharply increased the number of debt collection cases it has filed in the Motor City—and in a state where its practices have been called into question before. Credit Acceptance’s main debt-collection attorney was indicted in 2005 for falsifying hundreds of court documents, claiming he’d notified consumers to appear in court when he hadn’t.
Credit Acceptance had also been accused in the past by a suburban Detroit court of providing insufficient documentation to support its requests to garnish borrowers’ wages. The court’s clerk had discovered reams of errors in its filings, but when Credit Acceptance sued the court for subjecting its garnishment requests to more scrutiny, the Michigan Supreme Court sided with the lender, leaving courts barely any leeway to substantively review the accuracy of its filings.
“With the Michigan Supreme Court case, it makes it very easy for them to do this,” Stifler said. “It’s like a lawsuit mill. They don’t need to make sure they have the paperwork in order or be absolutely sure that what they say is owed is actually owed. They have pretty free rein to file what they want.”
The company has long portrayed itself as a do-gooder, a lender of last resort for consumers who otherwise had no other options. But consumer advocates characterized Jalopnik’s findings about Detroit as alarming, and said they call into question whether Credit Acceptance is even providing its customers with a sound loan product.
The figure “is pretty striking in terms of numbers,” Stifler said. “If you’re not putting out an affordable product, or if you’re putting out a predatory product and/or not looking at whether people can actually repay it,” she went on, “the fact that high collection lawsuits is not all that surprising.”
“It’s entirely structured to be about collection,” said Missouri attorney Bernard Brown, who has waged legal battles against Credit Acceptance since the 1990s. “That’s fundamental to their model.”
Drew Millitello didn’t start out his bankruptcy law career by filing cases on behalf of consumers who went broke. After graduating law school in 2009, with the economy in tatters, he initially worked on behalf of creditors looking to recoup whatever they could from bankrupt companies.
“Which is the opposite side of what I do now,” he said.
A few years on, the caseload from representing clients of auto lenders and the county treasurer’s office took a toll.
“When you’re sitting behind a desk and you’re signing motions you don’t really see the first-hand accounts of it,” Millitello said. “But when you are in front of the judge… that’s when you begin to see the human aspect of it.”
“That’s what drew me back to the other side,” he said. Millitello linked up with a few friends from high school and launched a consumer debtor bankruptcy firm called Detroit Lawyers, PLLC.
Shortly thereafter, the firm put up a brief blog post about Credit Acceptance for consumers who’d faced a repossession, a garnishment, or simply dealt with a high interest car loan for a vehicle that broke down. Immediately, Millitello said, potential clients started reaching out.
“It drives a lot of our clients into bankruptcy,” Millitello said of Credit Acceptance collection cases. Today, at least 25 percent of the firm’s active garnishment cases deal with Credit Acceptance, he said.
“Their whole business model is based on this,” he said.
A number of Millitello’s case files, provided to Jalopnik, offer insight into the characteristics of the loans carried by consumers who wind up in bankruptcy.
There’s a 2001 Ford Expedition, financed with a total $11,000 loan just last year, at 22.99 percent. A 2005 Mercury went to a Detroit resident with a $16,000 loan carrying a 24.99 percent interest rate. A 2009 Ford Escape went to another resident, also for a $16,000 loan at an interest rate of 23.99 percent.
(Keep in mind those numbers reflect the total cost of the loan including Credit Acceptance’s finance charges, not the base price for the car itself, which speaks to how egregious these financing offers can be.)
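To make that parenthetical concrete, here is a rough amortization sketch showing how an APR in the low-to-mid twenties splits a quoted total into the amount actually financed and the finance charges. The 48-month term is purely my assumption (the article does not state the loan terms), so treat the output as illustrative arithmetic, not as Credit Acceptance’s actual contract math.

```python
# Rough fixed-payment amortization arithmetic, not actual contract terms.
# Given a quoted total repayment, an APR, and an assumed term, back out the
# implied level monthly payment, the amount financed, and the finance charges.

def implied_principal(total_paid, apr, months):
    r = apr / 12.0                     # monthly interest rate
    payment = total_paid / months      # level monthly payment
    # Present value of an annuity of `payment` for `months` periods at rate r
    principal = payment * (1 - (1 + r) ** -months) / r
    return payment, principal

if __name__ == "__main__":
    # Hypothetical reading of the $11,000 Expedition figure at 22.99% APR,
    # assuming a 48-month term (an assumption, not stated in the article).
    payment, principal = implied_principal(11_000, 0.2299, 48)
    print(f"monthly payment ~${payment:,.0f}, amount financed ~${principal:,.0f}, "
          f"finance charges ~${11_000 - principal:,.0f}")
```

Under that assumed term, the sketch suggests roughly $7,100 of car financed for $11,000 repaid, which is exactly the point the parenthetical is making about how much of these totals is finance charges.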
Low-credit buyers have few options to turn to, and that’s why they’re stuck with loans that carry sky-high interest rates. If they fall behind, and their car gets repossessed, the effect can bury them, especially if Credit Acceptance takes them to court, Millitello said.
“When you’re being garnished the debt puts you in the corner,” he said. “They have families to support. Twenty-five percent of their wages, they cannot survive.”
Denita Anderson knows how hard it can be. The 24-year-old bought a 2005 Chevy Impala last year with a loan from Credit Acceptance, at 22.99 percent, so she would pay $15,300 in total over the course of five years.
Anderson bought into the Credit Acceptance motto. She’d had a car repossessed before, and the lender gave her a second chance at redeeming her credit score. Working for a janitorial service in metro Detroit, she needed a car, too. Her grandmother co-signed on the loan.
The Impala didn’t last long, she said. It got repossessed twice, the second time voluntarily because the car broke down on Interstate 94 outside of Detroit and repairs were unsuccessful afterward.
Even after voluntarily surrendering her car, Anderson continued making payments. Her work shifts were sporadic at the time, so if she couldn’t pay in full for a month, she immediately called the lender and worked out an arrangement. Still, she said, she vowed to pay something, even for a car she no longer had.
That apparently wasn’t enough. In December 2017, she received a letter in the mail notifying her that Credit Acceptance had secured a court order to have her wages garnished. The company started having nearly 25 percent of her wages docked per check.
“I was paying them,” she said. “Why even garnish me if I’m still giving you the money? I’m not even driving the vehicle.”
After talking it over with her mother, she decided to declare bankruptcy and reached out to Millitello.
“I was like, ‘I don’t want to file bankruptcy this young,’” she said. “But at the same time I can’t have them garnishing my check.”
In the cases that are still open, at least 171 defendants have filed for bankruptcy. The 36th district court has received 1,696 orders for bankruptcy stays from defendants in these cases.
Following the Great Recession, the rate of auto delinquencies continued to increase, as lenders loosened the purse strings for low-credit buyers to access credit for a car. Sales increased to new heights, but delinquency rates jumped in tandem, with a record 6.3 million people now 90 days or more behind on their auto loans, an increase of 400,000 car buyers from a year prior.
Mayer, the professor from Southern Methodist University, said he and his colleagues were surprised by the lack of prior research on the consequences of repossessions for borrowers.
Using credit reports, court records, and demographic data, Mayer and his colleagues arrived at a pointed conclusion: in states with laws that make it easier for cars to be repossessed, subprime borrowers are more likely to get approved for a loan.
“Increased credit access for borrowers is essentially the ‘bright side’ of making auto repossession easy for lenders,” Mayer told Jalopnik.
Michigan didn’t meet all the criteria for the researchers’ definition of a state that makes repossessions “easy,” but that’s only because it requires a repo agent to obtain a license. It’s smooth sailing otherwise. The study’s findings crystallized just how much of an impact repossessions can have on buyers who wind up defaulting on their loan payments: approval rates on credit applications are reduced for two to three years, and for mortgage credit the effect lasts up to five years.
Making auto repossession easy for lenders, Mayer said, “takes away some of their incentive to screen borrowers and only lend to those who can really afford the loan.”
“This puts the onus on borrowers to understand whether they will benefit from the loan and be able to repay it,” he said. “Weighing the benefits of an auto loan against the potential costs associated with default is a challenging task for many subprime borrowers. This problem is exacerbated by the fact that prospective borrowers often don’t know the full scope of the consequences of auto repossessions.”
If dealers aren’t aiding buyers in finding a car that fits their budget, that compounds an already difficult situation. If their bill gets sent to collections, the effects can be severe. And a study earlier this year found more and more consumers are trying to stick it out and deal with debt collectors instead of turning to a bankruptcy court for relief.
Robert Lawless, the University of Illinois law professor who co-authored the study, called the period when someone’s struggling before filing bankruptcy the financial “sweatbox.” Those who endure more than two years of this, he said, are called the “long strugglers.” Their time in the sweatbox is “particularly damaging,” according to the study.
“During their years in the sweatbox, long strugglers deal with persistent collection calls, go without healthcare, food, and utilities, lose homes and other property, and yet remain ashamed of needing to file,” the study said. “For these people in particular, though, time in the sweatbox undermines their ability to realize bankruptcy’s ‘fresh start.’”
There’s one feature that stands out the most among this particular crowd, Lawless told Jalopnik. “They’re most likely to have a debt collection filed right before bankruptcy,” he said.
“Obviously,” he went on, “nobody wants to file bankruptcy. Nobody wants to go to the hospital either, but if you’re sick or need an operation, you need to go to the hospital.”
One of the reasons debt collection suits have become more commonplace in recent years is partly due to the so-called information revolution, he said.
“It’s just easier to bring these lawsuits,” Lawless explained. “It’s easier to find the people, it’s easier to track the debt, it’s easier to keep the records, it’s easier to generate the paperwork that you need to process these lawsuits.”
More than 75 percent of consumers who responded to the researchers’ survey said they agreed to some extent that “pressure from debt collectors” contributed to their filings. At the same time, the study found that in-court debt collection has increased over the last decade.
“No one’s trying to argue there should be an easy way to walk away from your obligations,” Lawless said. “But at some point you just can’t pay the debt.”
The standard line from the subprime lending world is that low-credit buyers receive financing with high interest rates to compensate for the purported risk they pose. But the higher the rate, the higher the chance of default, and so critics have taken to asserting that the system is wholly designed to set up consumers for failure.
Now, Americans hold more than $1.2 trillion in auto loan debt, and with delinquencies at a high rate, critics point to the lending practices and loan terms themselves as the main driver of defaults.
“While debt collection is an important way creditors recoup their losses, when a creditor such as Credit Acceptance relies on debt collection for such a significant portion of its loans, that is an indication that there are problems with the lending practices and loan terms,” Stifler, of the Center for Responsible Lending, said. “When borrowers are set up to fail by the unaffordable terms of a car loan, of course we will see many folks who are already struggling unable to keep up with the payments.”
Credit Acceptance’s debt collection efforts in Michigan ran into a roadblock in 2005, when a local Metro Detroit court returned “numerous” garnishment requests “loaded with apparent mistakes to the attorney who had filed them” on behalf of the lender, according to Human Rights Watch.
“He filed 60 or 70 garnishment requests in a single day,” William Richards, former chief judge of the 46th District Court in the City of Southfield, told the group. “There were thousands of dollars’ worth of errors.”
Richards’ clerk asked the lender’s attorney to correct errors and provide additional supporting documentation to support their requests. Instead of doing just that, Credit Acceptance sued the court, arguing the clerk had no right to request additional documentation.
When the case eventually made its way to the Michigan Supreme Court, the state’s highest-ranking judges ultimately sided with Credit Acceptance. In an opinion, the court noted that, “We recognize that [the district court] has an understandable interest in the rights of judgment debtors and in protecting them from writs of garnishment that are baseless or inflated.”
“Nonetheless,” the judges went on, “the court rules do not allow the imposition of additional filing requirements on judgment creditors seeking writs of garnishment.”
Richards, who couldn’t be reached for comment, was unequivocal in explaining the impact of the decision at the time.
“We’ve got to have some role here,” he told Human Rights Watch. “We can’t just be rubber stamps.” But the ruling effectively stymied any efforts to apply scrutiny to the lender’s garnishment requests.
That might’ve been an especially tough pill to swallow for Richards, since Credit Acceptance’s main attorney got caught up in a scandal just a few years prior. In 2005, prosecutors indicted Howard Alan Katz on 308 counts of criminal contempt of court for falsifying hundreds of court records. Katz eventually struck a “no contest” plea deal on 136 of the counts that required him to spend six months under house arrest.
Katz, according to a news report at the time, filed fraudulent court documents that stated he’d notified a person when to appear in court, but in fact, he hadn’t.
When the accused defendant never showed up, Katz “sought and often got a default judgment from a judge,” the report said, “allowing him to collect the past-due money by garnisheeing the person’s wages.” Katz even had vehicles seized belonging to defendants before they even knew they’d been sued.
Katz, who couldn’t be reached for comment, denied any knowledge of the alleged wrongdoing and blamed the issue on a hired gun failing to properly attempt to serve the defendants with a suit.
Following Howard Katz’s indictment, records show a local attorney named Jason Michael Katz started routinely representing the lender in debt collection cases. Jason Michael Katz deferred comment to Credit Acceptance and wouldn’t say whether he’s related to the lender’s former attorney.
“I’m not going to answer any more questions about this,” he told Jalopnik.
Once the Howard Katz case was squared away, records show Credit Acceptance started filing more debt collection suits each year against Detroit residents. In 2007, the lender represented 1.45 percent of the 36th District Court’s total caseload. Five years later, it jumped to 5.21 percent. By last year, it reached 12.18 percent.
It’s a potentially startling reality for a court that, just five years ago, nearly became insolvent. The court that year posted an operating deficit of $4.5 million, leaving it facing “extraordinary challenges,” a report found.
In response, the Michigan Supreme Court appointed a state appellate judge, Michael Talbot, to address the issues. Talbot spent a year and a half making staff cuts and reclassifying positions, so workers would handle expanded duties, before turning the court back over to the local administrator.
The court is in better financial shape today, and it still handles one of the largest case volumes in the U.S.—including a significant amount of filings from Credit Acceptance. Talbot had no comment when asked about Jalopnik’s findings and the implication that fees from the lender’s cases are helping 36th District stay afloat.
Nancy Blount, chief judge of 36th District, told Jalopnik by email only that: “Our court exists to resolve disputes and to do so in a neutral and just way. We are not a revenue generating organization. Our funding unit, the City of Detroit, has a statutory obligation to fund us whether we realize revenue or not.” (A spokesperson for the mayor’s office didn’t respond to requests for comment.)
A 2017 report on Credit Acceptance, which first highlighted the number of debt collection suits filed in Detroit, suggested that 36th District Court generated as much as $2 million in fees from the lender’s garnishment requests alone. The report, from PlainSite, was commissioned by an investor betting that Credit Acceptance’s stock price would tank. (Blount disputed the finding, and said the figure would be much lower.)
Still, the fact Credit Acceptance’s cases now represent such a significant portion of the court’s total civil filings should be concerning to court officials, said Aaron Greenspan, PlainSite’s founder.
“From the court’s perspective, there’s no way this should be permissible,” Greenspan told Jalopnik. “It uses an immense amount of government resources to simply process the cases for this company.”
There are countless stories from across the U.S. of consumer experiences with Credit Acceptance.
But the personal toll a debt collection case can take on someone’s life—and how extreme the situation can get—is perhaps best exemplified several hundred miles away from Detroit, through Missouri resident Carrie Peel.
As gas prices skyrocketed during the throes of the economic crisis of 2008, Peel visited a dealer called Car Time and put down $1,000 for a used Ford Taurus, financing the remaining costs with a nearly $11,000 Credit Acceptance loan. With a low credit score, she had few options, so she accepted the loan at 24 percent interest, meaning she’d wind up paying a total of $17,850.
It made sense, as Peel described it to Jalopnik. Amid the worst months of the recession, Peel and her husband both lost their jobs, their house ended up in foreclosure, and with gas at $4 per gallon, they needed a more fuel efficient car.
“We were trying to reestablish our credit, and, unfortunately, because our credit scores were so low, we didn’t really have too many other options other than to go to a second-chance finance company,” Peel, 40, said in an interview.
That day, Peel signed a sales agreement to purchase the vehicle and drove off with the car, but Car Time never sent the vehicle’s title. When she headed back a few weeks later to get a copy, she discovered Car Time had closed up shop. Unable to afford another car, Peel was stuck in a frightening predicament.
Peel reached out to Credit Acceptance for help. But as a lengthy court record later demonstrated, she perhaps shouldn’t have bothered.
“As Peel continued to drive the unregistered car, she was stopped multiple times by the police and received tickets and penalties,” a judge wrote in a 2013 opinion. “She also became anxious and embarrassed over the situation, especially after being pulled over with her son and his friend in the car.”
Credit Acceptance said she’d need to file suit and secure a declaratory judgment to win back her title. But it wasn’t until Peel lost her job and qualified for Legal Aid that she learned that, under Missouri law, “if a buyer is not provided with a title to the vehicle, the sale is void and the buyer is relieved of the obligation to make payments on the debt,” the judge wrote.
Credit Acceptance has operated in Missouri since 1992, so one might expect it would have known this. But after speaking with no fewer than 111 Credit Acceptance employees, Peel got nowhere. Instead, the lender insisted she had to continue paying the full amount of each payment as stipulated in her sales agreement.
“Never once was she permitted to speak to a supervisor even though she was promised many times,” Bernard Brown, her attorney, told Jalopnik.
When Peel finally connected with Brown, they dug in and took Credit Acceptance to court. What they learned was just how much debt collection means to the company.
Testimony from Credit Acceptance showed the company employs more than 400 collectors—about one-third of its total staff—to make calls and chase down defaulted buyers for loans. Buyers hamstrung by obviously difficult situations like Carrie Peel.
Jurors ultimately found Credit Acceptance had violated state laws in its handling of Peel’s situation, and awarded her $1.1 million in compensation. Judge Gary D. Witt, the appellate judge who wrote an opinion that later upheld the decision, was unequivocal about the lender’s actions.
“CAC contends that it is difficult to ascertain any harm that Peel suffered. CAC asserts that when the whole picture is taken into account, the only ‘real loss’ Peel sustained was the difference between driving a titled car and driving an untitled one,” Witt wrote. “This argument shows CAC’s continued indifference to Peel’s plight.”
The case perhaps explains why attorney Brown wasn’t surprised by the amount of cases filed in Detroit’s district court.
“This kind of stuff has been done across the country,” Brown told Jalopnik. “It’s an anomaly that [Brown’s co-attorney on the case] Dale Irwin and I were in Missouri and happened to fight this battle.”
Throughout the ordeal, Peel said she made her payments on time, each month.
“They’re such terrible people, they really are,” she said. “They tried to destroy me. They thought I was going to go away.”
Attorneys in the Detroit area said that conditions in the city are ripe for Credit Acceptance to mount such a high number of cases.
“When you live in Michigan, the roads are shit, so it’s expensive to keep a car from falling apart,” said Adam Taub, a Detroit-area attorney who handles auto loan-related cases. “This disproportionately affects the poor.”
Millitello, the bankruptcy attorney, called the increased number of collection cases a growing “crisis.”
“They know when they’re giving out these subprime loans that some of this income is going to come in from wage garnishments collections,” he said. “These cars are crapping out on [consumers]. They’re junk.”
It’s hard to say how Credit Acceptance’s founder Don Foss feels about the stories some of his consumers have shared. Foss retired in 2017.
Today, the small lender he took public 25 years ago is enjoying a warm reception from Wall Street, with its stock price jumping from $286 per share a year ago to the current price of $415 per share. The 74-year-old’s success allowed him to purchase a 13,000 square foot mansion in the Detroit suburb of Franklin. Across 3.5 acres, the Foss residence has nine bedrooms and six bathrooms.
He built his empire off a business that routinely drags residents in neighboring Detroit into court day in and day out, over cases that time and again are shown to be dubious at best, over cars like Denita Anderson’s busted Chevy Impala.
And so goes the cycle of Credit Acceptance.
“When the car falls apart, the consumer can’t afford to get to a job, school etc., and must abandon the vehicle and seek other transportation,” attorney Taub said. “CAC repos the vehicle, sells it at auction for what it’s worth… less repo and other fees, and this results in a judgment of at least the full amount financed along with any force placed insurance charges that accrued before the repo.”
“So it’s no wonder that there are so many CAC judgments in 36th District Court,” Taub lamented. “This is one way in which our society keeps people in economic servitude.”
_______________________________________________
Please find the original article here: https://jalopnik.com/how-a-subprime-auto-lender-consumed-detroit-with-debt-a-1829527899
While the economy was booming after the last downturn in 2009–10, the average loan taken on by consumers for a new car went up from around $25K to around $31K. All the while, Americans added about $1.2T (yes, trillion) in auto loans.
I found an interesting article, excerpted below:
Auto lending in the midst of an economic collapse is never pretty, as people stop paying on their car loans to divert money to more immediate necessities like food and shelter. This collapse is no different.
In an attempt to slow the spread of COVID-19, state governments have implemented social distancing guidelines, travel bans and restrictions, quarantines, shelter-in-place orders and shutdowns. These actions have caused economic hardship in the areas in which they have been implemented and have led to an increase in unemployment and resulted in many consumers delaying payments or re-allocating resources, leading to a significant decrease in our realized collections.
In short: an unspecified number of people have stopped paying on their car loans, the full extent of which Credit Acceptance can’t reveal yet.
If all of that is to be somewhat expected—Credit Acceptance’s business is lending money to people who might not be able to afford it long-term—a somewhat more alarming situation is in a different report out this week.
Exhibit B: Ally Financial, which was founded over a century ago as the lending arm of General Motors. It was known as GMAC until, er, the last economic crisis about a decade ago, a few years after GM sold a majority stake in the business. Today, by volume, it is one of the biggest auto-loan lenders in the country; on Monday it said many of its more reliable borrowers weren’t so reliable anymore.
Ally Financial Inc. said 25% of its auto-loan customers have asked for payment deferrals, and the vast majority have never been delinquent before.
Of the 1.1 million borrowers who requested forbearance, more than three-quarters have never asked for a deferral before and 70% have never had a late payment with Ally, Chief Financial Officer Jennifer LaClair told analysts during a conference call Monday.
Exhibit C: Lenders like Volkswagen Credit, which said last week it was waiving payments for six months for some people who have lost their job because of the virus. It’s a preemptive admission that you can’t get blood from a stone, I guess. But that also came with some stipulations:
To qualify, unemployment must not occur within the first 90 days of ownership. The customer must have lost their job because of economic reasons and must be collecting unemployment benefits. Customers also must have been employed full time at least 12 consecutive weeks before job loss. The offer is good for 12 months from the date of purchase.
The program does not cover leases, and it is not available in New York.
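Restated as code, the stipulations above amount to a simple eligibility rule set. The sketch below is just my paraphrase of the quoted conditions; the function, dataclass and field names are invented for illustration and are not any real Volkswagen Credit API.

```python
# Illustrative restatement of the eligibility conditions quoted above.
# All names here are hypothetical; this is not a real VW Credit interface.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class DefermentRequest:
    purchase_date: date
    job_loss_date: date
    lost_job_for_economic_reasons: bool
    collecting_unemployment_benefits: bool
    weeks_employed_full_time_before_loss: int
    is_lease: bool
    state: str

def qualifies_for_payment_relief(req: DefermentRequest, today: date) -> bool:
    owned_before_job_loss = req.job_loss_date - req.purchase_date
    return (
        not req.is_lease                                      # leases not covered
        and req.state != "NY"                                 # not available in New York
        and owned_before_job_loss >= timedelta(days=90)       # not within first 90 days of ownership
        and req.lost_job_for_economic_reasons                 # job lost for economic reasons
        and req.collecting_unemployment_benefits              # must be collecting benefits
        and req.weeks_employed_full_time_before_loss >= 12    # 12 consecutive weeks full time
        and today - req.purchase_date <= timedelta(days=365)  # offer good for 12 months from purchase
    )
```

Seen this way, the fine print screens out a fair share of the borrowers who would most want the relief, which is part of why the offer is best read as a preemptive gesture rather than a blanket waiver.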
The lending arms of most of the other big automakers are also offering various deferment plans for borrowers, which push deferred payments to the end of the loan. Edmunds says that most lenders prefer to deal with customers on a case-by-case basis. The best thing you can do if you can’t make your payment is to reach out and try to strike a deal. I fear that, increasingly, many borrowers won’t even be able to manage that.