

Is AI the Final Invention of Humanity, or the Start of a New Era?

Is artificial intelligence humanity’s final invention?

A creation so powerful it could eventually replace us, leaving humanity behind like the ruins of once-great civilizations now lost to time?

Since the very beginning of the AI hype, I’ve been obsessed with that question. But I never had the time – or a good method – to actually answer it. Until today.

So I rolled up my sleeves and dove into what I believe is the only reasonable way to search for such answers: by digging into the history. I wanted to understand whether we’re truly entering an entirely new era or if we’re simply circling the same ideas and this is yet another spiral of history.

And what I found? Well, some things blew my mind. A lot confirmed my existing beliefs (of course), but a few discoveries challenged what I thought were absolute truths. It was a perspective-shifting journey. And in this article, I would like to take you on this journey with me.

So, is AI really our final invention? Let’s dive in and uncover what history can teach us about the future! 


The Origins of Modern Computing

Pre-AI Foundations: The Mechanization of Information

To understand AI, we first need to trace the history of computing itself – its origins reach back further than many realize.

In the 1890s, a young inventor named Herman Hollerith faced an enormous challenge. The U.S. Census Bureau was drowning in paper. The 1880 Census had taken a staggering seven years to process manually. With America’s population booming, officials feared the 1890 census might take more than a decade – rendering the data obsolete before analysis was even complete.

Herman’s solution was a revolutionary punch card machine that automated data processing. Each card represented a person, with holes punched in different positions to indicate characteristics like age, gender, or nationality. When placed in his tabulating machine, metal pins would pass through these holes, completing electrical circuits and incrementing mechanical counters.
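
To make that mechanism concrete, here's a tiny Python sketch of the tallying idea – my own illustration with invented card fields, not Hollerith's actual card layout:

```python
from collections import Counter

# Each "card" is the set of positions punched for one person. The field
# names here are invented for illustration, not Hollerith's real layout.
cards = [
    {"male", "age_20_29", "born_abroad"},
    {"female", "age_30_39"},
    {"male", "age_30_39"},
]

counters = Counter()
for card in cards:
    for hole in card:
        # A pin passing through a hole closed a circuit and advanced a counter.
        counters[hole] += 1

print(counters["male"], counters["age_30_39"])  # 2 2
```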

The result was extraordinary: the 1890 census was completed in just two and a half years instead of the estimated eleven. Hollerith had effectively created the first practical machine for automated data processing. In 1896, he founded the Tabulating Machine Company to commercialize his invention. Through mergers and acquisitions, this company would eventually evolve into what we now know as IBM – International Business Machines. Think about it for a second: the merged company was formed in 1911 as the Computing-Tabulating-Recording Company (CTR) and renamed IBM in 1924 – more than 110 years ago, for a computer company that is still around today.

For decades, these punch-card systems dominated data processing. But, as is always true in the high-tech industry, things couldn’t stay the same forever. And as terrible as it sounds, war helped drive progress.

In 1946, the U.S. Army unveiled ENIAC – the Electronic Numerical Integrator and Computer – whose construction had begun in secret in 1943. This massive machine filled a 30 by 50-foot room, used about 17,000 vacuum tubes, and consumed enough electricity to power a small town. Yet despite its size, ENIAC could perform thousands of calculations per second, far outpacing any electromechanical predecessor.

Soon after, ENIAC’s creators set to work on UNIVAC – the first commercial computer designed for business and government applications rather than military use. When the U.S. Census Bureau installed the first UNIVAC I in 1951, it marked computing’s transition from a military specialty to a business tool.

To really appreciate the scale of technological progress: in 1890, it took about two and a half years to process census data for 62 million people using Hollerith’s punch card machines. By 1960, even though the population had nearly tripled to 180 million, the first population reports were published in just four months.

But despite these breakthroughs, early computers like UNIVAC remained prohibitively expensive, with a price tag of around $1 million in the early 1950s, a figure even more staggering when you account for inflation. That kind of cost meant only government agencies and the largest corporations could afford them, severely limiting how widely computers were adopted.

IBM saw the opportunity. Between 1951 and 1954, they introduced a series of increasingly powerful, business-oriented computers – the IBM 701, 702, 705, and 650. And perhaps most importantly, in 1959, they launched the IBM 1401, which leased for just $2,500 a month. Thanks to this dramatically lower price, the IBM 1401 opened the door for medium-sized businesses to access computing power for the first time.

Early Concerns Over Automation

Yet something else was happening along with the evolution of computing capabilities. As these machines began replacing clerical workers and streamlining business operations, they also triggered the first wave of what we now call “automation anxiety.”

The 1958 recession brought these fears to the forefront. With unemployment rising, the media coined a new term: the “Automation Depression.” Magazine headlines warned of “robots taking jobs” and the “rise of the automatic factory.” Labor leaders testified before Congress about the threats posed by these new electronic brains.

One science news service captured the mood with a striking analogy: “With the advent of the thinking machine, people are beginning to understand how horses felt when Ford invented the Model T.” This vivid image – comparing human workers to horses rendered obsolete by automobiles – reflects a deep-seated fear we still grapple with today in the era of AI.

Yet these fears, while understandable, proved largely unfounded. While certain job categories did shrink – thousands of clerical roles disappeared as companies computerized their accounting departments – other jobs were created. The demand for computer operators, programmers, and systems analysts grew exponentially. Companies often retrained former clerks as programmers or machine operators.

Most significantly, the productivity gains from early automation allowed businesses to expand in new directions, creating different kinds of jobs. General Motors, for example, added hundreds of thousands of jobs in the 1950s even as it automated parts of production because the efficiency gains helped GM grow overall.

This pattern – technology eliminating some jobs while creating others – would repeat itself throughout computing history. It’s a reminder that technological change, while disruptive, rarely plays out as simplistically as our fears suggest.

Early AI: Curiosity and Hype

First Experiments and AI Enthusiasm

The 1950s gave us the hardware of modern computing. The 1960s gave us something even more profound: the dream of machine intelligence.

Between 1965 and 1972, early AI pioneers created systems that seemed almost magical for their time. At MIT, Joseph Weizenbaum built ELIZA, a program that could simulate conversation – most famously as a therapist asking probing questions. While ELIZA used simple pattern-matching rather than true understanding, people became emotionally engaged with it. Some even asked to speak with the computer in private, believing it truly understood them.
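
To get a feel for how little machinery ELIZA-style conversation actually needs, here's a toy pattern-matching responder in Python – a rough sketch of the idea, not Weizenbaum's original DOCTOR script:

```python
import re

# A few reflection rules in the spirit of ELIZA's therapist persona.
# Patterns and responses are my own simplified examples.
rules = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),  # fallback when nothing else matches
]

def respond(utterance: str) -> str:
    text = utterance.lower().strip(".!?")
    for pattern, template in rules:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I am feeling anxious"))   # How long have you been feeling anxious?
print(respond("My job is stressful"))    # Tell me more about your job is stressful.
```

Notice how the second reply already betrays the trick: the program simply reflects your words back, with no understanding behind it.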

Meanwhile, at Stanford Research Institute, researchers created Shakey – the world’s first mobile robot with spatial awareness. Shakey could perceive its surroundings through a camera and range finders, build simple plans to navigate rooms and move objects, and recover from basic errors. In an era when most computers were still room-sized calculators, a machine that could move through physical space and respond to its environment seemed revolutionary.

Another milestone came with Terry Winograd’s SHRDLU system, developed at MIT. SHRDLU operated in a simplified “blocks world” but could understand natural language commands like “put the red block on the green cube.” It could ask clarifying questions and execute instructions within its virtual environment – demonstrating what appeared to be real language understanding.

These early systems captured the imagination of both researchers and the public. Government funding poured in, especially from DARPA, the U.S. military’s advanced research agency. Corporate labs at places like IBM and Bell Labs joined the effort. Researchers became boldly optimistic.

Marvin Minsky, one of the field’s founders, reportedly stated in 1967: “Within a generation… the problem of creating ‘artificial intelligence’ will substantially be solved.” Such confidence reflected a fundamental belief that intelligence was primarily a matter of symbolic manipulation – a problem that could be solved with the right algorithms and enough computing power.

The First AI Winter

But the early enthusiasm soon collided with hard reality. By the early 1970s, it became clear that AI’s challenges were vastly more complex than anticipated.

Programs that worked well in simplified environments failed when confronted with the messiness of the real world. Machine translation, which had seemed promising in demonstrations, produced laughably bad results in practice. Robots that functioned in controlled laboratory settings stumbled when faced with unpredictable environments.

The fundamental issue was what researchers called the “combinatorial explosion” – when problems have so many possible solutions that even powerful computers can’t check them all. Early AI systems also lacked common sense knowledge that humans naturally possess.

In 1973, a prominent review of AI research in the UK (known as the Lighthill Report) concluded that AI had failed to achieve its lofty promises and was unlikely to do so anytime soon. This devastating assessment led British funding agencies to slash support for AI projects.

Similar skepticism took hold in the United States, where DARPA dramatically reduced its AI investments. As money dried up, many projects were canceled or scaled back. What had been a flourishing field entered what would later be called the “first AI winter” – a period of reduced funding, diminished expectations, and public disillusionment.

The pattern established here – extraordinary promise followed by disappointment and retrenchment – would become a recurring cycle in AI’s development. But even as general AI ambitions were tempered, researchers found success by narrowing their focus to specific, well-defined problems.


The Rise of Expert Systems

Narrow AI Successes

A new approach called “expert systems” gained traction. Instead of attempting to create broadly intelligent machines, researchers focused on capturing human expertise in narrow domains.

One of the first and most successful was MYCIN, developed at Stanford University in the 1970s. MYCIN used a knowledge base of about 600 rules to diagnose blood infections and recommend antibiotics. It worked by asking physicians a series of questions about a patient’s symptoms and test results, then applying its rules to reach a diagnosis. Remarkably, MYCIN often performed at the level of human specialists in this narrow domain and could even explain its reasoning – a crucial feature for building trust.
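
To illustrate the flavor of a rule-based expert system – greatly simplified, with invented rules rather than MYCIN's real medical knowledge – here's a small Python sketch, including the kind of "explanation" such systems could give:

```python
# Each rule maps a set of required findings to a conclusion plus a
# human-readable justification - the basis of the system's "explanations".
# The rules below are made up for illustration; they are not medical advice.
rules = [
    ({"fever", "stiff_neck"}, "suspect_meningitis",
     "fever plus stiff neck suggests meningitis"),
    ({"gram_negative", "rod_shaped"}, "suspect_e_coli",
     "gram-negative rods are consistent with E. coli"),
]

def diagnose(findings: set[str]) -> list[tuple[str, str]]:
    conclusions = []
    for conditions, conclusion, why in rules:
        if conditions <= findings:          # fire the rule if all conditions hold
            conclusions.append((conclusion, why))
    return conclusions

for conclusion, why in diagnose({"fever", "stiff_neck", "rod_shaped"}):
    print(f"{conclusion}: because {why}")
```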

Other notable systems followed. The Prospector system helped geologists find mineral deposits and famously discovered a significant molybdenum deposit, demonstrating real-world value.

Another important system was XCON (also known as R1), developed at Carnegie Mellon University for the computer company Digital Equipment Corporation (DEC). XCON was created to help configure so-called VAX computer systems, which were powerful and flexible business computers used in the 1970s and 1980s. These systems weren’t sold as simple, ready-to-use machines. Instead, customers would place detailed orders based on their specific needs – for example, how much memory they wanted, how many hard drives, what kind of processors, and what software to include.

Getting such an order right was like giving instructions to someone with zero context – someone who follows them so literally that they end up doing something completely absurd. (Think of that meme where the guy tries to spread Nutella on bread based solely on a kid’s vague directions.)

Manually checking all these options to make sure the parts worked together correctly was slow and error-prone. XCON automated the process: it analyzed the customer’s order and figured out how to assemble a working system from the chosen parts. This saved DEC millions of dollars by reducing mistakes, speeding up delivery, and cutting down on support issues.

During that time, important advances were made in teaching computers to recognize images. In 1979, a researcher named Kunihiko Fukushima created something called the Neocognitron. In simple terms, it was a computer system that could identify specific shapes and features in pictures – like edges, curves, or simple objects – which we call “patterns.” The system was inspired by how the human brain processes vision. What made it special was that it could spot these visual patterns anywhere in an image, not just in one specific location. This flexibility was a major breakthrough and laid an important foundation for today’s AI systems that “see” images – recognizing faces, objects, and other visual information.

These successes triggered renewed optimism. By the early 1980s, companies large and small were investing in AI research and development. Japan announced the ambitious Fifth Generation Computer Systems project, aiming to create machines with reasoning capabilities akin to human intelligence. This spurred additional funding in the U.S. and UK to avoid falling behind in what was portrayed as an “AI race.”

Bottlenecks and the Second AI Winter

The expert systems boom reached its peak in the mid-1980s – and then reality set in again.

These systems, despite their initial promise, revealed fundamental limitations: they were brittle – working well within their narrow domains but failing completely when faced with novel situations or edge cases. They were expensive to build and maintain, requiring knowledge engineers to painstakingly interview experts and codify their knowledge as rules. And they couldn’t learn from experience – each new piece of knowledge had to be manually programmed.

As corporations discovered these limitations, enthusiasm waned. The specialized computers that had been built specifically to run AI programs (called “Lisp machines” because they were optimized for the Lisp programming language that AI researchers preferred) became obsolete when regular, cheaper computers became powerful enough to do the same job. Many AI startups of that time went out of business, and corporate AI research labs reduced their goals and funding.

By 1987-1988, the AI industry was in the midst of the “second AI winter” – another period of reduced funding and diminished expectations. The Fifth Generation project fell far short of its goals, and the term “artificial intelligence” itself became somewhat tainted in business circles.

Yet amid this downturn, a significant breakthrough occurred that would later transform the field: in 1986, David Rumelhart, Geoffrey Hinton, and Ronald Williams popularized backpropagation for training neural networks.

To understand why it’s important, think of a neural network like a student taking a test. If the student gets the answer wrong, backpropagation is like a teacher telling them exactly which part of their thinking was off, so they can adjust their approach next time. This “teaching feedback” is passed backward through the layers of the network, helping the whole system improve its guesses. Before backpropagation, neural networks didn’t have a clear way to learn from their mistakes, especially when they had many layers. This discovery showed how deep networks could get better by learning step by step, just like people do when they practice and get corrections.
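
Here's a minimal from-scratch sketch of that feedback loop – a tiny two-layer network learning XOR with backpropagation, so the "error passed backward" is visible in code (layer sizes and learning rate are arbitrary illustration choices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR: the classic toy task

W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))  # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))  # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 1.0

for step in range(10000):
    # Forward pass: the network makes its "guess".
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the error signal is sent back through each layer.
    d_out = (out - y) * out * (1 - out)      # how wrong the output was
    d_h = (d_out @ W2.T) * h * (1 - h)       # blame passed back to the hidden layer

    # Nudge every weight slightly in the direction that reduces the error.
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())   # typically close to [0, 1, 1, 0] after training
```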

At the time, this advance received limited attention amid the general disillusionment with AI. But it planted very important seeds for the deep learning revolution, which I’ll discuss in a moment.

Modern Foundations (1989–2011)

From Pattern Recognition to Commercial AI

As the 1990s began, AI researchers adopted a more pragmatic approach, focusing on specific problems with clear metrics for success. Pattern recognition emerged as a particularly fruitful area.

In 1989, Yann LeCun and colleagues at AT&T Bell Labs demonstrated that neural networks could effectively recognize handwritten digits – a breakthrough that led to practical applications like automated check processing for banks and zip code reading for mail sorting.

Game-playing AI also made significant progress. Between 1992 and 1997, IBM developed chess-playing systems that led to Deep Blue, the computer that famously defeated world champion Garry Kasparov. This win was made possible by a mix of powerful computers, fast search through many possible moves, and built-in knowledge about how good chess players make decisions. The match became a historic moment in AI, often compared to the moon landing in terms of its cultural impact.

It’s important to understand why games like chess were such a big deal for AI research. Scientists needed environments that shared important features with real-world problems – like having many possible options and requiring strategic thinking – but were much more structured and manageable. Games offered this perfect middle ground: they were complex enough to be challenging but simple enough to have clear rules and goals. Chess, with far more possible positions than any human could ever enumerate, presented an enormous but contained challenge that pushed AI capabilities forward while providing a clear way to measure success.
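
For a taste of how game-playing programs "search through many possible moves", here's a minimal minimax sketch on a made-up game tree – the core idea only; Deep Blue combined far deeper search with chess knowledge and dedicated hardware:

```python
# A minimal sketch of minimax search - the core idea behind classic
# game-playing programs. The tiny game tree below is invented purely for
# illustration: inner lists are choice points, numbers are final scores.
def minimax(node, maximizing):
    if isinstance(node, int):                 # a leaf: the position's final score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

game_tree = [
    [3, [5, 2]],       # outcomes reachable after our first possible move
    [[0, -1], 7],      # outcomes reachable after our second possible move
]
print(minimax(game_tree, maximizing=True))    # 3: the best outcome we can guarantee
```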

These game-playing systems became stepping stones toward tackling more complex real-world problems.

In the 2000s, AI started moving much faster, thanks to three things coming together: more data, more powerful computers, and smarter algorithms.

People were putting huge amounts of information online, basically preparing the ground for AI systems to learn. At the same time, computers were getting faster and cheaper every year – this followed Moore’s Law, which says that the number of tiny switches (called transistors) on a computer chip doubles about every two years, making computers more powerful over time.
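
As a back-of-the-envelope illustration of what "doubling every two years" compounds to, starting from the roughly 2,300 transistors of 1971's Intel 4004:

```python
transistors = 2_300              # Intel 4004, 1971 (roughly)
for year in range(1971, 2021, 2):
    transistors *= 2             # Moore's Law: double about every two years
print(f"{transistors:,}")        # ~77 billion - the order of today's largest chips
```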

This steady improvement in computer speed gave AI the “muscles” it needed to handle big data and complex problems. A great example of this progress came with the DARPA Grand Challenge, a U.S. military competition for self-driving vehicles. In the first race in 2004, no car managed to finish the rough desert course – but in the 2005 event, several self-driving cars navigated it without any help from humans. It was a big milestone that pointed the way toward the driverless cars we’re now starting to see on real roads.

Between 2006 and 2009, Netflix ran a contest offering $1 million to anyone who could improve how it recommends movies to users. This challenge helped push forward new ideas in machine learning and how computers can learn from user behavior. Around the same time, researchers began assembling ImageNet, a huge collection of labeled pictures first released in 2009. It gave AI researchers the data they needed to help computers learn to recognize objects in images – something that would become a big breakthrough for computer vision later on.

AI Enters the Public Sphere

By 2011, AI was no longer confined to research labs and specialized applications – it was beginning to enter mainstream awareness.

IBM’s Watson system competed on the TV quiz show Jeopardy! against former champions Ken Jennings and Brad Rutter. Watson’s victory – answering questions about history, literature, pop culture, and science – demonstrated AI’s ability to process natural language and retrieve relevant information from vast knowledge bases.

That same year, Apple introduced Siri on the iPhone 4S – bringing conversational AI to millions of consumers. While Siri’s capabilities were limited and sometimes frustrating, it represented a significant step in making AI a part of everyday life. And remarkably, more than a decade later, Siri still struggles to set a timer without drama.

Deep Learning and the AI Revolution

Breakthroughs in Deep Learning

The period from 2011 to 2016 saw an extraordinary acceleration in AI capabilities, driven primarily by advances in deep learning – the technique of training neural networks with many layers.

Microsoft made significant strides in applying deep learning to speech recognition. By 2016, their system achieved human-level accuracy for the first time – a milestone that many had thought might be decades away. This breakthrough quickly found its way into products like transcription services and voice assistants.

Perhaps even more striking was the work of DeepMind, a London-based startup focused on artificial general intelligence. In 2013, they demonstrated a system that could learn to play classic Atari video games directly from the screen pixels, often reaching superhuman performance. Their agent, called DQN (Deep Q-Network), learned through trial and error, discovering strategies that its creators hadn’t explicitly programmed.

This achievement was remarkable because it showed a single algorithm learning many different tasks without task-specific engineering. The agent figured out how to play Breakout, Space Invaders, and other games using the same learning approach – suggesting a step toward more general intelligence.

Google recognized the significance of this work and acquired DeepMind in 2014 for over $500 million. This acquisition signaled the major tech companies’ recognition that AI was becoming a strategic technology that could reshape their industries.

GANs and Image Generation

In 2014, Ian Goodfellow introduced a new approach called Generative Adversarial Networks (GANs). GANs use two neural networks – a generator and a discriminator – that compete against each other. The generator creates images (or other content), while the discriminator tries to determine whether they’re real or fake. Through this adversarial process, the generator gets better and better at creating realistic content while the discriminator gets better at finding fakes.
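
Here's what that adversarial loop looks like in practice – a deliberately tiny PyTorch sketch where the generator learns to mimic a simple 1-D distribution rather than images (network sizes and training settings are arbitrary choices for illustration):

```python
import torch
from torch import nn

# The generator learns to produce samples that look like they came from
# a Gaussian with mean 3; the discriminator learns to tell real from fake.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(3000):
    real = torch.randn(64, 1) + 3.0            # "real" data: samples around 3
    fake = G(torch.randn(64, 8))               # the generator's forgeries

    # Train the discriminator to label real as 1 and fake as 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to fool the discriminator into saying 1.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())   # should drift toward roughly 3.0
```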

This innovation enabled AI to not just recognize patterns but create new content that resembles its training data. Early GANs produced somewhat blurry images, but the technology improved rapidly. By 2018, GANs could generate photorealistic faces of people who didn’t exist, landscapes that were never photographed, and artistic images in the style of any painter.

The ability to generate realistic images, along with advances in other media like text and audio, set the stage for what we now call “generative AI” – systems that can create new content rather than just analyze existing data. Yet the true revolution was yet to come.


Game Changer: The Transformer

Transformers and Language Models

Speaking of revolutions. In 2017, Google researchers published a paper titled “Attention Is All You Need,” introducing a new neural network architecture called the Transformer. This seemingly technical and rather evolutionary advance would prove revolutionary, particularly for natural language processing.

Earlier language models had to read text one word at a time, like someone reading slowly from left to right. This made them slow and not very good at remembering words that came much earlier in a sentence. Transformers changed that. They could look at all the words in a sentence at once, which made them much faster to train and much better at understanding the full meaning of a sentence, even when important words were far apart.
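
The core trick is "attention": every word computes a weighted look at every other word in parallel. Here's a minimal numpy sketch of scaled dot-product attention – the basic building block, stripped of the multiple heads and learned projections a real Transformer uses:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Each row of Q asks "what am I looking for?", each row of K says
    # "what do I contain?", and V holds the information actually passed on.
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # relevance of every word to every other word
    weights = softmax(scores, axis=-1)        # normalize into attention weights
    return weights @ V                        # weighted mix of the values

# Toy example: 4 "words", each represented by an 8-dimensional vector.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
print(attention(x, x, x).shape)   # (4, 8): every word attends to all four at once
```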

Because of this new design, Transformers made it possible to build a whole new generation of powerful language models. These models were trained on huge amounts of text – books, websites, and more – and learned to do one main thing: predict the next word in a sentence. That might sound simple, but to do it well, the model has to “pick up on” grammar, facts, common sense, and even logic, just by reading tons of examples. It doesn’t really understand the world like a human does, but it gets very good at guessing what comes next based on patterns it has seen.

Why are Transformers so important? Because they’re fundamentally predictive in nature. Think of them as similar to the T9 predictive text on old mobile phones, but vastly more powerful. When you typed on a T9 keypad, the phone would guess which word you meant based on the most common words that matched those key presses. Modern language models do something similar, but at an enormous scale and with much more context.
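
To make the "predict the next word" idea tangible, here's a toy predictor that just counts which word follows which – the same principle in miniature, nowhere near the scale or sophistication of a real language model:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the sofa".split()

# Count which word follows which - the crudest possible "language model".
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word: str) -> str:
    return following[word].most_common(1)[0][0]

print(predict_next("the"))   # 'cat' - the most frequent follower of "the" in this tiny corpus
```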

This predictive nature is key to understanding what these models are and aren’t. They’re not conscious or truly reasoning – they’re making sophisticated predictions based on patterns in their training data. They can fake logic and conversation remarkably well, but they’re still fundamentally text prediction engines rather than thinking entities.

Reality Check: What AI Is and Isn’t

Debunking the Myth of Conscious AI

This brings us to a crucial point: despite their impressive capabilities, today’s AI systems are not conscious, sentient, or genuinely intelligent in the human sense.

Large language models like GPT-4 or Claude can write poetry, explain complex topics, and even engage in philosophical discussions that seem deeply thoughtful. But these systems aren’t actually thinking or feeling – they’re mathematical prediction machines generating text based on statistical patterns.

When a language model writes an essay about the meaning of life or expresses an opinion about politics, it isn’t sharing its beliefs – it has none. It’s producing text that statistically resembles human writing on those topics. The appearance of understanding is so convincing that it’s easy to anthropomorphize these systems – in other words, to treat their output as if it came from a thinking, feeling human being, when it’s actually just mimicking patterns found in human-created text. But doing so fundamentally misunderstands what they are.

General AI vs. Specialized AI

This distinction leads us to the difference between general and specialized AI.

General Artificial Intelligence – sometimes called Artificial General Intelligence (AGI) – would possess the ability to understand, learn, and apply knowledge across a wide range of tasks, similar to human intelligence. It would have true comprehension and the ability to reason about novel situations. Despite decades of research, we do not yet have AGI, and experts disagree about how far away it might be.

What we do have are increasingly capable specialized AI systems. Modern models excel at specific domains like language processing, image recognition, or game playing. They can even perform impressively across multiple domains – a language model might write an essay, then solve a math problem, then generate computer code. But this versatility comes from massive training on diverse data, not from general intelligence or understanding.

When we mistake specialized AI for general intelligence, we risk both overestimating its capabilities – assuming it truly understands concepts – and underestimating the immense challenges involved in building true AGI.

So, where does that leave us?

Are we truly witnessing the rise of a superintelligent AI destined to destroy humanity?

Well – as you might have guessed – I don’t think so. At least, not in its current form.

What we’re seeing today is not the dawn of artificial general intelligence, but rather a powerful extension of computerized reasoning – an evolution of ideas that date back to the 1960s.

Yes, it’s faster, more polished, and available at scale – but conceptually, it’s still built on the same foundation: using patterns in data to make predictions.

Will this wave of AI change our lives?

Undoubtedly – it already has. And there’s much more to come.

But if history is any guide, this boom – fueled by large language models and text-based agents – may not last forever. Unless we make a breakthrough beyond current techniques, we may soon enter yet another “AI winter,” much like the cycles we’ve seen in the past. The hype fades, funding shrinks, and reality sets in.

Still, we don’t need AGI to see a real impact today.

And since this article focuses on business, let’s shift our attention to what really matters right now: 

What does today’s AI mean for companies and how is it already reshaping the business world?

Real-World Applications of AI

Enterprise and Industrial Use

In enterprise settings, AI systems are parsing and analyzing internal documents – reducing the time and error rate for tasks that once required significant human effort. Insurance companies use AI to extract information from claims forms. Legal firms employ it to search through thousands of documents for relevant case information. Healthcare organizations use it to summarize patient records and assist with medical coding.

AI is also revolutionizing customer intelligence. Retailers and service providers analyze purchasing patterns to predict customer needs and personalize offerings. Recommendation engines – whether for Netflix movies, Amazon products, or Spotify songs – use AI to suggest items based on your past behavior and the preferences of similar users.
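
One common ingredient behind such recommendations is collaborative filtering: find users whose tastes resemble yours and suggest what they liked. Here's a minimal cosine-similarity sketch with a made-up catalog and ratings:

```python
import numpy as np

# Rows = users, columns = items; values are ratings (0 = not rated).
# Both the users and the "catalog" are invented for illustration.
items = ["thriller", "comedy", "documentary", "sci-fi"]
ratings = np.array([
    [5, 1, 0, 4],   # me
    [4, 0, 5, 5],   # a user with similar taste
    [1, 5, 4, 0],   # a user with very different taste
], dtype=float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

me = ratings[0]
similarities = [cosine(me, other) for other in ratings[1:]]
best_match = ratings[1:][int(np.argmax(similarities))]   # my closest-taste neighbor

# Recommend something they rated highly that I haven't tried yet.
unseen = [i for i, r in enumerate(me) if r == 0]
pick = max(unseen, key=lambda i: best_match[i])
print(items[pick])   # 'documentary' - rated highly by my closest-taste neighbor
```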

AI for Consumers

For everyday consumers, AI has moved from an elite technology to an everyday presence. The most obvious examples are the various digital assistants – Siri, Alexa, Google Assistant – that respond to voice commands in millions of homes and phones. Well, maybe not Siri, but you get the point.

E-commerce has been transformed by AI-powered personalization. When you shop online, the products you see, the order they appear in, and even the prices you’re offered may be influenced by AI systems analyzing your browsing and purchase history.

Customer service is increasingly handled by AI chatbots that can interface with internal systems. When you contact a company about a return or to track a package, you might interact with an AI that can access your order information, process your request, and resolve your issue without human intervention.

And of course, generative AI tools like ChatGPT, Claude, and Midjourney are now accessible to anyone with an internet connection, allowing people to generate text, images, and other content with simple prompts.

AI for Personal Productivity and Education

The most powerful aspect of modern AI may be its ability to augment human capabilities – particularly in productivity and learning.

AI can now help write emails, summarize lengthy documents, and generate reports. It can translate languages in real time, making global communication far easier. My mother, for example, can now surf the internet, send emails to her peers, and talk to people she could never reach without such tools.

For entrepreneurs and small businesses, AI can perform tasks that once required specialized staff or services – from creating marketing copy to analyzing customer feedback.

In education, AI is transforming how we learn complex topics. Students can get instant explanations tailored to their level of understanding. Professionals can quickly get up to speed on new subjects without wading through textbooks or courses. And those with specialized knowledge can use AI to communicate their expertise more effectively to non-specialists.

I’ve experienced this personally while writing this very article. ChatGPT helped me organize research, suggest historical connections I hadn’t considered, and refine my explanations of technical concepts. It saved me countless hours of research and revision. The key point is that I remained in control – asking questions, evaluating responses, and making final decisions about content and framing.

This collaborative potential may be where AI delivers its greatest value: not by replacing human creativity and judgment, but by amplifying them.

Let me know what you think in the comments below.

Conclusion

On that note, thanks for reading. If you’re looking for a technology partner for your business, feel free to reach out to me. I’ll be glad to connect! 

And stay tuned for my next blog posts, where I’ll be sharing more interesting insights and discoveries.

System Thinker, Technology Evangelist, and Humanist, Jeff, brings a unique blend of experience, insight, and humanity to every piece. With eight years in the trenches as a sales representative and later transitioning into a consultant role, Jeff has mastered the art of distilling complex concepts into digestible, compelling narratives. Journeying across the globe, he continues to curate an eclectic tapestry of knowledge, piecing together insights from diverse cultures, industries, and fields. His writings are a testament to his continuous pursuit of learning and understanding—bridging the gap between technology, systems thinking, and our shared human experience.
