<![CDATA[Human Made]]>https://humanmade.fyi/https://humanmade.fyi/favicon.pngHuman Madehttps://humanmade.fyi/Ghost 5.118Wed, 30 Apr 2025 12:14:38 GMT60<![CDATA[Does ChatGPT-Generated Text Hurt Your SEO?]]>https://humanmade.fyi/chatgpt-generated-text-seo/67d19b286f8c39000169d188Thu, 03 Apr 2025 12:19:59 GMT

There is a better way to look at the issue of AI-generated content than focusing on how it was created.

  • What is the purpose of the content? Is it the main focus or a sidenote?
  • How much effort did you put into creating it, and does it match the expectations of the reader?
  • How much generated content is there in total?
  • Did you bother to read the content before using it?

There are times when AI-generated content is perfectly fine to use, although I believe there are more times when it isn't.

Content Effort

In its guidance on Google search and AI content, Google says:

Google's ranking systems aim to reward original, high-quality content that demonstrates qualities of what we call E-E-A-T: expertise, experience, authoritativeness, and trustworthiness.

An LLM has no experience, no expertise, no authority, and its answers can't be trusted to be factual.

So by definition, it doesn't meet the quality threshold unless there is a human involved in editing and adjusting it.

Then the article says (emphasis mine):

Our focus on the quality of content, rather than how content is produced, is a useful guide that has helped us deliver reliable, high quality results to users for years.

In other words, crap content is out, generated or not.

(And, yeah, I know. Google's very good at ranking crap content. I think we need to take these notes as a sign of where it wants to go, not where it is today.)

We can also look at the Quality Raters Guidelines. And for that, I will switch to talking about the level of effort because that's one way Google measures quality.

It says:

The quality of the [main content] can be determined by the amount of effort, originality, and talent or skill that went into the creation of the content.

Under 'effort' it says:

Consider the extent to which a human being actively worked to create satisfying content.

And in section 4.6.6:

The Lowest rating applies if all or almost all of the [main content] on the page... is copied, paraphrased, embedded, auto or AI generated, or reposted from other sources with little to no effort, little to no originality, and little to no added value for visitors to the website.

I could go on. But I think we know what Google is saying: there is a fine line between LLMs being useful and harmful, and that's the line it's trying to describe.

Is ChatGPT-Generated Content "Helpful"?

At the risk of bringing everyone out in hives, let's think about "helpful content".

In some cases, content generated by an LLM can be helpful. Let's think about:

  • Ecommerce product pages
  • Landing pages
  • Data tables or charts
  • Forms
  • Technical documentation
  • Page metadata

My view is that some of this content can be AI-generated.

For example:

  • I don’t think there’s anything wrong with producing AI-generated content in a programmatic SEO project if it's done with good intentions. That's assuming the content is fairly short and informational, and it's not the "MC", or "main content", on the page.
  • If you need a description of a product, location, or service, you can use a tool like GPT for Sheets to knock that out. The results need to be edited because they will always be wrong. But it'll save some time. And, to be honest, I've never met a writer who was looking forward to filling in 2,000 rows on a spreadsheet. I did it right at the start of my writing career, and I never want to do it again.
  • Who reads meta descriptions? Nobody. Not even Google. If you want to generate them, I won't fight you on that.
  • Alt text: same. I'll outsource it.
  • Charts: I'm not a designer, so I'll use an LLM in a pinch.

All of this is secondary to writing.

So what shouldn't be generated?

  • Anything you want humans to read, trust, enjoy, share, and learn from.
  • Any piece of content designed to sell something.

I would rather poke myself in the eye with a sharp stick than read an AI-generated blog post. So there's your line.

Is Google Able to Detect "Helpful" Content?

No, I don't think so.

The Helpful Content Update caused people to lose their jobs and livelihoods. Not only did Google seem to underestimate the HCU's impact, it seemed unable to get the situation under control, or even to understand what went wrong.

In October 2024, it invited a small group of content creators to its HQ to discuss the HCU. This quote from Pandu Nayak says a lot:

“I suspect there is a lot of great content you guys are creating that we are not surfacing to our users, but I can't give you any guarantees unfortunately."

Danny Sullivan allegedly said:

"There’s nothing wrong with your sites, it’s us."
"Your content was not the issue."

Wait.

The Helpful Content Update destroyed these sites, but the content wasn't the problem?

There's more context in this excellent blog post:

Google’s elderly Chief Search Scientist answered, without an ounce of pity or concern, that there would be updates but he didn’t know when they’d happen or what they’d do. Further questions on the subject were met with indifference as if he didn’t understand why we cared.

My interpretation is that high-quality, high-effort content should rank consistently, but it doesn't. The HCU overshot massively and Google hasn't pulled it back enough yet.

What Google says it wants and what it appears to reward are not the same thing. That's the painful truth. And chasing what works is usually more attractive in the short term.

Long term, I think Google will keep adjusting during core updates to try to do a better job of aligning rankings with guidelines, which it is not doing yet.

Is Publishing ChatGPT-Generated Text Risky?

Yes, I believe so.

The days of lookalike affiliate marketing content are gone. Clients still want it, in some cases, so I'm sure we'll continue to produce it if we can be paid for it.

But I think good writers will move on to a much more satisfying type of content: high-quality content with EEAT.

Here's why I'm convinced.

1. Information Gain

Google says in black and white that it wants to see original content at the top of the SERPs.

LLMs produce unoriginal content.

Information goes in, information comes out in a different order. And not only that, but the information that is spat out is frequently wrong.

Let's pause to think how often LLMs are wrong. SimpleQA is designed to elicit hallucinations with questions that are much more difficult than typical prompts, but the failure rates are still an eye-opener.

And this is still true, in my experience:

Currently, the only way to reliably use LLMs is to know the answer to the question before you ask it.

So for good information that is actually accurate, you need a human involved in verifying information, expanding it, and improving quality. There is no way around it.

There should be something in every blog post that is new, surprising, innovative, or supported by unique personal experience.

2. The Tidal Wave Effect

Many companies use LLMs to produce as much content as possible as quickly as possible. I have heard CEOs of large marketing companies boast about scraping People Also Ask questions and publishing 100 pages of content per day to rank for them.

The updated Quality Raters' Guidelines warn against this.

I already wrote about the fact that Google is bad at detecting slop in search results, even though the patterns are obvious to you and me.

It may get better at detecting it. I don't think it's too difficult to pick out the word patterns.

When it does... will the content be reclassified as scaled content abuse? Spam?

Going back to Google's guidance, it says:

Using automation—including AI—to generate content with the primary purpose of manipulating ranking in search results is a violation of our spam policies.

So, again, the days of cheap content might not last forever.

If it works now, I don't think it will for long.

3. The Reputational Damage

Osama wrote about a study that indicates that humans don’t mind AI-generated content until you tell them it’s AI.

What if they can already tell?

I pushed for a couple of years to have author bios added to a client site. Not because they're good for SEO. (At the time I started pushing for them, nobody cared.)

I wanted them because they build trust.

I want to know whose advice I'm following and why I should believe them.

Now think about what happens if you generate your author bio with ChatGPT. It instantly discredits the author and all the content under their name.

Ahh. The bio is BS, so all the articles are as well. Makes sense.

Everything I've said so far assumes that traditional Google search results still exist in two years' time.

Now, Google is still the dominant player in search, so I think it's highly unlikely that organic results will disappear.

But the intent of someone searching Google vs searching ChatGPT might be totally different. The conversion rates might be different as well.

Semantic search is important for AI Overviews, LLM search, and AI Mode. So by incorporating semantic keywords, hiring good writers, and prioritising quality, you'll have a better chance of future-proofing against the changes that are coming.

Augment With AI, but Don't Replace

I use LLMs for various tasks in my day-to-day work. I'm OK with using LLMs for small content tasks, productivity, and reducing the load on writers.

I draw the line at ChatGPT-generated blog posts. I draw the line at low-effort content. And I believe that publishing it is a poor investment.

]]>
<![CDATA[Is AI Becoming Human or Are We Turning Into Machines?]]>Technology was never the problem.

A year or so ago, I was having dinner with a friend when the subject of my writing came up, and so the inevitable question followed: "Aren't you afraid that AI will be the end of you?”

Alice in Wonderland facepalming
]]>
https://humanmade.fyi/is-ai-becoming-human-or-are-we-turning-into-machines/67dd48f23383ab00018cc2f3Fri, 21 Mar 2025 11:53:08 GMT

Technology was never the problem.

A year or so ago, I was having dinner with a friend when the subject of my writing came up, and so the inevitable question followed: "Aren't you afraid that AI will be the end of you?”

Alice in Wonderland facepalming

Some assume AI is "better" because it can produce something faster, cleaner, and more efficient than a human ever could — a flawlessly structured accentual-syllabic verse, an image sharper and more polished than even the most skilled Photoshop master could create.

Perfect, spotless, inhuman.

What It Means to Be Human

Setting aside the glaring misunderstanding of the purpose of art, this kind of thinking exposes something far more unsettling: how little regard we have for our own humanity.

I wasn’t an A-grade science student, but hasn’t every major evolutionary leap stemmed from a mutation, a deviation — ultimately, an error? To be human is to be flawed. To be alive is to be imperfect. Or, seen from another angle, we are alive because we are flawed and imperfect.

Perfection doesn’t — nay, cannot — exist in nature because it assumes there’s only one right way of doing things. It implies that everyone and everything should strive for the same standard.

And for the love of me — isn’t that terrifying?

Besides, who exactly are we supposed to be striving to become? Everyday Joe? Nietzsche? Kim Kardashian? In a world of 8 billion unique stories, how do you even define "perfect"?

More importantly, why would you want to?

AI Isn’t Replacing Us, We’re Replacing Ourselves

While public debate fixates on whether AI will take over human roles, a far more insidious shift is unfolding in the background.

Another dinner, another story. Just two months ago, I was visiting friends in the sunny Canary Islands when, inevitably, the topic of AI came up. Of course, it did.
I felt like Bridget Jones at a dinner party full of married couples, awkwardly fielding questions about her spinsterhood.

A group of married couples having dinner, looking at single Bridget Jones expectantly

I explained that not only do I approve of AI, but I use it frequently. Then I added that writing a 2,000-word article without AI takes me just as long as writing one with it. The only difference is that the AI-assisted version has better-researched references and more polished grammar, which makes my editor’s life easier.

Silence.

Everyone stared at me, wide-eyed, as if I had just started speaking in my native language without realizing it. Finally, one of them broke the silence:

"But… how can it take you so long to write with AI? You just send the prompt and copy-paste the result."

And that’s when I first learned that people aren’t reviewing whatever AI spits out.

My jaw hit the floor.

A skeleton police officer from Coco dropping his jaw

We used to fear that machines would overturn governments, take over the planet, and rule us with their iron fists (just look at the Terminator saga). But, the way I see it, the real crisis isn’t that AI is becoming more human. It’s that we’re starting to hold ourselves to machine-like standards.

We mold ourselves to AI-generated beauty standards and nod along to algorithm-approved opinions, reinforcing the degradation of critical thinking — not out of coercion, but out of convenience.

Aldous Huxley must be turning in his grave.

The real fear isn’t that AI will surpass us. The real fear is that we’ll willingly surrender our original thoughts, convinced that imperfection is a glitch to be fixed and that any form of discomfort is an existential threat (hello, cancel culture).

The real fear is that we will start measuring our humanness through the lens of machines.

And in many ways, we already have.

When We Became “Human Doings”

In our quest to achieve more, we’ve somehow become less.

When I sold productivity suites for a big tech company, I had reservations about how things were done. I didn’t approve of the hamster-wheel work ethic, and I saw colleagues’ personal lives fall apart under the weight of work addiction.

But the real turning point came when I realized these tools weren’t meant to give people back their time so they could work less and enjoy life more. They existed to cram more into their day, to push them further, faster — straight into burnout.

Since then, I’ve been watching from the sidelines, seeing how a productivity-obsessed world shapes not just how we see ourselves, but how we speak about ourselves.

We Talk About Ourselves Like Machines

There was a time when people got tired. Now, we run out of battery.

If my WWII-generation grandmother heard me say that, she’d think I was speaking in code. Language evolves, sure. I’d never give up saying “Google it” — who even says “browsing” anymore? But have you noticed how the words we use to describe ourselves sound less human and more mechanical?

Research is very clear on the correlation between the language we use and how we see ourselves[1] — and lately, we’ve been talking about ourselves as if we were designed for efficiency, not existence.

We no longer need to rest, we recharge.

We no longer take a break, we unplug.

We’re no longer overwhelmed, we don’t have the bandwidth.

Because in a world that values output over well-being, why rest when you can power down? Why rest at all?

We Treat Rest as Inconvenience

Humans have turned rest into a moral failing.

We feel guilty for doing nothing: “I just need to keep busy.” When we fall ill, we need to be “up and running” ASAP. And when we do need to rest (ugh 🙄), we must maximize the downtime for peak recovery.

If you think this is an exaggeration, just look at how little time we allow women to recover after childbirth. What could possibly justify separating a mother from her newborn just weeks after delivery if not the belief that a human's worth is tied to productivity?

We Worship Efficiency Over Everything

Newton discovered gravity beneath an apple tree, not while optimizing his calendar.

But when “efficiency” becomes the only metric that matters, deep work and creative wandering lose out to five-step productivity YouTube videos.
Studies show that organizations prioritizing predictability and output above all else actually stifle the open-ended exploration crucial to genuine innovation and long-term engagement.[2]

Progress paradoxically requires inefficiency.

By striving to be perfectly efficient, we're redesigning ourselves to compete on metrics where AI will always win.

We’ve Made Ourselves Replaceable

Ironically, we are designing ourselves to be outperformed by AI.

A publishing friend confided that her company uses AI to screen manuscripts because it's more efficient. AI processes hundreds while humans do ten.

What about the weird, wonderful books that don't fit categories? García Márquez? Bukowski? Hemingway?

Speaking of Hemingway, I read in Everybody Writes that some of his work was once put through the Hemingway App, a tool designed to simplify prose for better readability. He failed.[3] The very writer after whom the app was named was deemed too complex by an algorithm. How do you like them apples?

Confused cartoon bunny

Marketability, trendiness, and similarity are the KPIs of today's world. And if we measure our worth by output and speed, we've already lost the race.

But in our rush to optimize everything, we've forgotten to ask the essential question.

The Question We're Not Asking

We reply-to-all, and refresh our inboxes like it’s a sacred ritual. But… what’s the point of it all?

Is the pinnacle of human existence really measured in content output, inbox zero, or conversion rates? Everyone's entitled to their own opinion, but I believe we are made for more than that.

I've always been a bit idealistic, but I'm not naive. The threat of AI replacing jobs is real. But what scares me more is that in our rush to compete with machines, we're willingly abandoning the very qualities that make us irreplaceably human — empathy, intuition, lived experience.

Productivity and efficiency aren't inherently bad. And neither is AI. AI is nothing but a tool. A tool that's supposed to free your time to pursue what truly matters, like smelling flowers or watching clouds drift across the sky. Yet somehow, we've twisted these tools into chains.

So the question isn't whether AI will replace us. It's whether we'll remember what we were being replaced from in the first place.


Reclaiming Our Humanity

AI doesn’t compete with us. We compete with it.

Let that sink in.

I believe in a future where we all sit beneath our own vines, free from spreadsheets and data analysis, tasks suited for machines.

So let’s drop this absurd habit of measuring our worth by machine standards. Instead, let’s celebrate the qualities AI will never replicate, like:

- Crying at that scene in The Lion King.
- Meme-ing.
- Instinctively knowing your friend needs to talk just by the sequence of her Instagram stories.
- Understanding that when your mom says she doesn’t need help in the kitchen, she absolutely, 100% does need help in the kitchen.

The future doesn’t belong to those who perfect imitation. It belongs to those who unapologetically embrace their humanity. In a world drowning in AI-generated noise, real human expression is priceless.

The rise of AI isn’t a threat. It’s an invitation to rediscover what it truly means to be human.

If we dare.

Men in Black III scene where K is walking along a dock saying, "Oh yeah, It's worth it... If you're strong enough!"
]]>
<![CDATA[Perfection Is the Enemy]]>https://humanmade.fyi/perfection-is-the-enemy/67d073796f8c39000169d10cWed, 12 Mar 2025 12:52:06 GMT

Tools and machines are designed for perfection (at least until they break down). I naturally carried that expectation to AI tools when they first arrived and shook the internet.

At first, it was nothing short of magic. It was hard to think of something ChatGPT couldn’t do.

But after the initial fascination wore off, its flaws became painfully obvious in all their glory.

For AI to be most useful to me, I needed it to be near-perfect in all the ways it was not. I learned that it hallucinates. Gives bad advice with the confidence of a scam artist. So using it for reliable research is often out of the question.


And its ability to count is worse than a dumb JavaScript bookmarklet I use to count the length of a piece of text when analyzing pages for SEO.

Yet, there’s an interesting contradiction with AI. It's perfect in all the ways I don’t need it to be.

AI excels at structure and conformity to its underlying rules.

This means when generating content, it will never make grammatical mistakes or use the wrong verb tenses. Or use inconsistent parallelism. Or make spelling errors. And it loves punctuation.

God it loves punctuation...

(As much as I appreciate a well-punctuated piece of text, it’s hard to match the pedantry with which AI often applies punctuation to text).


It’s this invariable adherence to writing conventions and grammatical correctness that leaves AI devoid of the character and personality that the best writers breathe into their work.

Real writers use different sentence structures throughout a piece.

They vary their sentence lengths, letting ideas dictate the complexity of their language rather than following set instructions without fail.

They relate personal anecdotes and find layers of meaning in the ordinary, told compellingly through their unique lens. They're, first and foremost, thinkers.

Great writers break rules.

Human or AI: Does It Even Matter?

We’re naturally developing an intuition for detecting AI-written vs human content.

I strongly suspect that the quirks, imperfections, and looser degree of uniformity I alluded to earlier are how most people determine whether content has a thinking human mind behind it.

But the real question is, does it even matter if something is AI-written or not? 

Is quality of content the only deciding factor or does origin also play a subtle role?


Consumer perceptions about AI are mixed and nuanced at this stage. In the paper Human Favoritism, Not AI Aversion, researchers evaluated how people perceive human-written ad content vs AI-generated content.

The result? People actually preferred AI-generated ad content.

Well, there goes my entire argument. All the human-ness of content amounts to nothing if people don’t even prefer it.

The consumer is the ultimate judge. And the verdict isn’t looking good for human writers.

There is, however, a very important catch that the researchers also discovered.

While people preferred AI content when they weren’t told if AI was involved, people rated the quality of human-written content higher when they knew its source.

In other words, consumers showed a bias toward content produced by human experts. At the same time, they had no aversion to AI content either.


A Content Reset Is Inevitable

Based on the results of most studies done so far on the subject, it doesn’t look like people have a strong preference for human-made content. Yet.

It’s hard to forget that AI is still fairly new. ChatGPT only became publicly accessible at the end of 2022. Any conclusions we may draw from surveys today are premature at best.

In fact, I think there's a solid chance the trends observed in these surveys might completely reverse in a few years.

Here's why:

At this point, apart from regular AI users, the world isn’t as familiar with AI and its patterns as more experienced practitioners are. 

Once this exposure increases, I have a strong feeling that the average person’s reactions to AI content will become much less enthusiastic.

The Office meme. Guy smiling after subscribing to a seemingly interesting newsletter. Freaks out when a girl appears next to him with "In today's digital landscape", representing AI clichés in writing

The monotony of AI content is already becoming hard to ignore.

Secondly, the internet is witnessing an enormous content explosion. We’re producing content much faster than humanly possible, and it’s only going to keep accelerating.

With this oversupply of AI slop, the web is well on its way to becoming saturated to grotesque proportions.

Recently, I found it difficult to tell apart two separate newsletter emails from completely different brands. The patterns and overpolished perfection were unmistakable, drowning out any individual voice the brands had of their own.


As my friend Claire noted in her insightful piece, Patterns Will Save Us In The End, we’re heading into a snake eating its own tail situation, with slop snowballing into a Big AI Slop Catastrophe. 

Continued overreliance on AI beyond that point is where originality will die out. Unless the human touch makes a comeback. 

So hold onto your writing quirks and imperfections; there’s no greater impetus for originality.

The propensity for error and human weakness might become the most sought-after qualities in marketing sooner than we realize.

]]>
<![CDATA[Patterns Will Save Us In The End]]>Google can detect AI-generated content using a system called SynthID.

It does this by adding patterns to the text it generates.

Example of watermarks from Google DeepMind

SynthID can then detect those patterns, allowing it to flag AI-generated content.

SynthID watermarking and identification

SynthID isn't just for Gemini. It has been open source since October 2024, and you

]]>
https://humanmade.fyi/patterns-will-save-us-in-the-end/67cac6c7e1ce34000128f4ceFri, 07 Mar 2025 12:34:45 GMT

Google can detect AI-generated content using a system called SynthID.

It does this by adding patterns to the text it generates.

Example of watermarks from Google DeepMind

SynthID can then detect those patterns, allowing it to flag AI-generated content.

SynthID watermarking and identification

SynthID isn't just for Gemini. It has been open source since October 2024, and you can try it on Hugging Face.
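For intuition, here is a toy sketch of how green-list watermarking of this kind can work: the generator biases its word choices toward a pseudorandom "green" half of the vocabulary seeded by the previous word, and the detector recomputes those splits and measures how often the text landed on green. Everything below (the tiny vocabulary, the hashing scheme, the bias parameter) is a simplified illustration, not SynthID's actual algorithm:

```python
import hashlib
import random

# Tiny stand-in vocabulary; a real system works over an LLM's full token set.
VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran", "off", "fast"]

def green_list(prev_token, fraction=0.5):
    """Deterministically split the vocabulary into a 'green' half, seeded
    by the previous token, so a detector can recompute the same split."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(VOCAB)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(VOCAB) * fraction)])

def generate_watermarked(n_tokens, bias=1.0, seed=42):
    """A fake 'model' that samples from the green list with
    probability `bias` at each step -- that skew is the watermark."""
    rng = random.Random(seed)
    tokens = ["the"]
    for _ in range(n_tokens - 1):
        greens = green_list(tokens[-1])
        pool = sorted(greens) if rng.random() < bias else VOCAB
        tokens.append(rng.choice(pool))
    return tokens

def green_fraction(tokens):
    """Detection: recompute each step's green list and measure how
    often the text 'happened' to pick a green token."""
    hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)
```

A watermarked sample scores a green fraction near 1.0, while unwatermarked text hovers around 0.5 by chance. Real systems operate over an LLM's probability distribution and use proper statistical tests rather than a raw fraction, but the principle is the same: the pattern is invisible to readers and obvious to the detector.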

Right now, I don't think Google can detect AI-generated content in the SERPs.

But I think SynthID gives us clues as to how it might do that in the future.

And as human readers, I think patterns allow us to 'detect' AI-generated content on our own. Automating this pattern recognition would be the next logical step.

Repeat, Repeat

Do you have a preference for a particular LLM based on the way it formats its responses?


(When Claude 3.5 Sonnet was replaced with 3.5 New, I could 'feel' the change before Anthropic announced it.)

Next example. Maybe you find some quirks of LLMs annoying?


(Ugh, thanks.)

Researchers found that there are clear patterns in the way LLMs respond. And it's possible to guess which LLM was used based on the patterns alone.


"Idiosyncrasies in Large Language Models" is not the first research paper to explore this. Here's another one, appropriately titled 'Why Does ChatGPT "Delve" So Much?'


Here's another one that talks about "excess vocabulary", focusing on the overuse of words like "delve", "crucial", "showcasing", and "underscores".
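The core of that "excess vocabulary" idea is easy to sketch: count how often known tell-words occur per thousand words. The word list and example texts below are my own illustrations, not the paper's actual methodology:

```python
import re
from collections import Counter

# Illustrative tell-words only; the research derives its list statistically.
TELL_WORDS = {"delve", "delves", "crucial", "showcasing", "underscores",
              "tapestry", "landscape", "pivotal", "leverage"}

def excess_vocab_rate(text):
    """Return tell-word occurrences per 1,000 words of text."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(count for word, count in Counter(words).items()
               if word in TELL_WORDS)
    return 1000 * hits / len(words)

sloppy = ("In today's digital landscape, it is crucial to delve into "
          "the rich tapestry of content, showcasing pivotal insights.")
plain = "We tested the feature on Tuesday and it broke twice."

print(excess_vocab_rate(sloppy))  # well above zero
print(excess_vocab_rate(plain))   # 0.0
```

A high rate is a signal, not proof; human writers use these words too, which is why researchers compare frequencies across whole corpora over time rather than flagging individual documents.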


And I bet you have spotted slop in the wild.


You just need to know what to look for.

Then you see it everywhere.


Google could detect this now. I daresay it wouldn't even need SynthID.

We just need it to be better at picking up on the patterns that are already there.

Time to EEAT

If you think it doesn’t matter to Google whether content is AI-generated or not, I have two responses.

  1. Google says it wants to reward "original, high-quality" content. By definition, content generated by a large language model cannot be original.
  2. When does AI-generated content become spam?

A few folks have flown a little too close to the sun while trying to find out.


I feel that the push for EEAT is the single best thing that has happened in content production in my entire 15-year career.

Good writers write in their own voice, and from their own experience. This is all stuff they are naturally good at, and want to do.

But I think it would be naive to think that Google is pushing EEAT solely because it wants to make the internet a better place. (Although I'm sure there are people working for Google who genuinely want that.)

Google also needs fresh content to prevent misinformation.

To make features like AI Overviews better (and goodness, they need to be better), Google needs a fresh supply of human-made content. It makes sense that it also needs to separate the human-written stuff from the slop.

Otherwise, the snake eats its own tail. New LLMs have to train on slop, and the inaccuracies are exaggerated further.

Sure, AI-generated content can meet the definition of being "helpful" in some cases. But that's not a terribly high bar to meet.

Human Content Will Make a Comeback

I believe relying on AI to generate content at scale will bite many businesses on the backside.

It's just a matter of time.

Many writers and editors are struggling to find work right now. I believe that we will see a resurgence in demand once the spam stops ranking.

And it will, once these patterns are detected at scale.

I'm not against LLMs in content production. I use Claude almost every day for processing data and, ironically, detecting patterns in text. It's a tool, like Google Sheets. I'm supportive of people who want to use it.

Does that mean I think an LLM can replace your content writer?

Well, put it this way: would you fire your data analyst because you bought a calculator?

A bunch of freelance writers, editors, and SEO professionals got together to create Human Made. We're here to prove that human-written content is content that people actually want to read. And we're here for the resurgence.

]]>