
YouTube Family Channels:
The New Frontier of Child Labor


Dylan Reid Miller

People browsing YouTube can no longer see the twenty-seven videos made by “family vloggers” Myka and James Stauffer documenting the process of adopting a disabled, autistic Chinese child named Huxley in 2016 and 2017, nor any of the videos featuring Huxley leading up to his disappearance from YouTube content three years later. Though Myka Stauffer had promised her viewers that “no matter what state he came to us, that we would love him” and that Huxley “is not returnable” (Moscatello 2020), the couple announced in May 2020 that they had rehomed their son. The fallout destroyed their channel, which has since been deleted. They bled subscribers and sponsors, and neither has returned to the public eye; Myka Stauffer has not posted on her Instagram since her apology on June 24, 2020.

The Stauffers had run their channel since 2014, but it was Huxley’s story that drew people in:

China definitely was the point where we had a super-loyal fan base following us, and they were always asking, ‘Where’s the next content? Where’s the next piece of this?’ [James] continued. We weren’t daily at that point. We had slowed down to probably three times a week, and they were asking, ‘Hey, can you do daily? Can you do some more content? We want to see how Huxley’s doing, how the family is getting along.’ It was like, Wow, okay, we have something here, and people are really interested in our journey. (Moscatello 2020; italics in original)

Their “Gotcha Day” video, a vlog about going to China to bring Huxley back to the United States, remained in a featured position on their channel until at least November 2019, two years after it was posted, racking up at least 4.5 million views (see Figure 1).


Figure 1: web.archive.org capture of the homepage of Myka Stauffer’s primary YouTube channel on November 14, 2019, with a video posted two years earlier, titled “Huxley’s EMOTIONAL Adoption VIDEO!! GOTCHA DAY China Adoption,” in the featured video spot. At the time of capture, it had received 4,533,783 views.

They are far from the only family publicizing their children’s lives for an online audience. The family vlogger industry draws viewers in the millions, with channels like The Ace Family boasting upwards of 17 million subscribers for content like “OUR NEW DOG BIT OUR BABY! **SCARY MOMENT**.” Nor are the Stauffers the only couple who attempted to build their brand on an adoption story out of Asia; in 2018, another creator couple, Nikki and Dan Phillippi, abandoned their plans to adopt from Thailand after discovering the country’s regulations would prevent them from posting anything about their new child online for a year after the process was finalized (NikkiPhillippi 2018). The Phillippis’ story speaks to uncomfortable truths about the family vlogging industry on YouTube: it is dependent on the unregulated labor of children, and the site and its advertisers are complicit in the practice through the functionality of the Search & Discovery algorithm, which suggests new content to viewers on the home page, in the sidebar, and in the thumbnails that overlay the video player after a chosen video finishes if autoplay is off. This system is referred to from here on as “the algorithm.”

 

Though Marxist scholars have analyzed both private sphere labor, such as housework and sex work, and the new methods of labor exploitation enabled by social media, very little scholarship to date has examined how these two issues intersect when it comes to children’s work online. Applying these frameworks can illuminate how even the most apparently harmless family channel content can be exploitative, transforming play into labor sold as authentic play for the profit of both the guardian uploading the content and the platform that hosts it. Two dynamics lay responsibility for the continuous creation and expansion of the market for these channels at the feet of YouTube and its advertisers: the algorithmic suppression of controversial topics, regardless of context, for fear of losing advertisers sitewide, and the broad dispersal of “harmless” longform family channel content, which can be monetized and can carry multiple midroll ads.

 

When discussing technology companies like Google, which owns YouTube, an important consideration must be made for the black box nature of the algorithm; “black box algorithm” is a term used in machine learning to refer to systems in which the inputs and outputs of a process can be observed, but exactly how one becomes the other is unknown, even to the developers. Much has been written about how black box algorithms serve to obscure risky or dangerous decisions made in the financial and tech industries. In The Black Box Society: The Secret Algorithms That Control Money and Information (2015), Frank Pasquale argues that these algorithms now make decisions that impact millions of people every day in life-changing ways, and that the public’s inability to access critical features of those decision-making systems is a danger to society, as we are unable to identify and correct any “faulty data, invalid assumptions, and defective models” that lurk in black boxes (18). This issue with the YouTube algorithm has cropped up over the years, as the lack of communication on the site’s part has often led to conflict with creators when changes to the system appear to reinforce bigoted public biases (see the following section). Unfortunately, this shroud of secrecy also means that any assertions made about the algorithm’s processes must be based on what can be observed rather than on a comprehensive analysis of the programming itself. In this article, such assertions are limited in their scope and are meant to refer only to the outcomes created and to bring attention to the ways obfuscation makes it difficult for creators to know what exactly they are doing “right” or “wrong.”
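To make the black box framing concrete, the toy sketch below (in Python, entirely hypothetical and unrelated to YouTube’s actual code) treats a recommender as an opaque function: an outside observer can record which inputs produce which outputs, but has no access to the logic in between, which is precisely the position creators and critics occupy.

```python
import random

def opaque_recommender(watch_history):
    """Stand-in for a black-box system: an outside observer can see what
    goes in and what comes out, but not the weights, rules, or training
    data that connect them."""
    random.seed(sum(len(title) for title in watch_history))  # deterministic toy behavior
    catalog = ["family vlog", "news commentary", "gaming stream", "makeup tutorial"]
    return random.sample(catalog, k=2)

# All an outside auditor can do is log (input, output) pairs and look
# for patterns in the outcomes.
observations = []
for history in (["family vlog"], ["news commentary"], ["family vlog", "gaming stream"]):
    observations.append((history, opaque_recommender(history)))

for inputs, outputs in observations:
    print(f"watched {inputs} -> recommended {outputs}")
```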

 

Additionally, many of the functionalities discussed are visible only to creators and not to viewers. Videos that have been demonetized are indistinguishable from videos that did not run ads to begin with, so the information I use about how these systems work has been gathered from YouTube users either discussing their demonetized videos on social media or censoring parts of their videos for the express purpose of avoiding demonetization. The latter is commonly done by altering footage that might otherwise trigger moderation, for example by zooming in on, covering, or distorting images, sounds, and words that are likely to attract the attention of YouTube’s content moderators. Like assertions made about the algorithm, my analysis of these systems and policies engages primarily with these interactions between creators and the site and avoids any specifics regarding the technology. YouTube also refuses to publicize, and prevents its creators from publicizing, the exact revenue that any given channel or video might generate, so the only external metrics of success that can be judged are subscribers, views, and “engagement,” which refers to likes, dislikes, and comments left on videos. Any discussion of a channel’s popularity is based on this data, alongside any featured displays of financial success within their content, such as the purchase of homes and cars.
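Because revenue data is withheld, external analysis must work from the public counts listed above. A minimal sketch of the kind of engagement metric such analysis relies on follows; the function is generic, and the numbers plugged into it are invented for illustration.

```python
def engagement_rate(views: int, likes: int, dislikes: int, comments: int) -> float:
    """Share of views that produced a visible interaction. Views, likes,
    dislikes, and comments are the only externally auditable signals;
    per-video revenue is never disclosed."""
    if views == 0:
        return 0.0
    return (likes + dislikes + comments) / views

# Invented counts for illustration only.
print(f"{engagement_rate(views=4_500_000, likes=60_000, dislikes=9_000, comments=12_000):.2%}")
```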

 

“No, mom, I’m actually seriously crying.”

Despite their popularity, family channels are also a persistent target of criticism online. The search term “family channels” (in an incognito window to avoid personalized results) on YouTube brings up an even split of positive and negative videos in the top twelve results, and users like The Dad Challenge Podcast devote their content entirely to exposing the ills of this genre. There is an awareness of the damage done by family channel content, but at the time of writing, there has been no intervention from any regulatory body or from YouTube. The reasoning is complex, as Marina Masterson points out in “When Play Becomes Work: Child Labor Laws in the Era of ‘Kidfluencers’” (2021). Regulating the labor of what Masterson calls “Kidfluencers” requires unique considerations compared to other child labor laws, as “the social media context requires legislators to weigh the interest in protecting children from harm with a parent’s right to choose the activities of their child in a way that is distinct from traditional child acting” (2021, 591). Similar concerns have been voiced about the non-union labor of children on reality television shows like the CBS show Kid Nation (2007), whose producers stress that these performers are there “voluntarily” rather than as employees; this is despite their labor falling under the generally accepted legal definition of work: “(1) that the activity is controlled and required by the employer, and (2) that it is pursued necessarily and primarily for the benefit of that employer” (Greenberg 2009).

 

Masterson’s law review article focuses on the practice of kidfluencers accepting sponsorships and promoting products and services. This is perhaps the most obvious example of kidfluencer labor, as there is a contract and a direct exchange of money for the child’s performance. However, because it is the most obvious form of exploitation, a focus on kidfluencers as advertisers distracts from the far more troubling reality that much of family channel content does not appear to be work but still results in the exploitation of child labor. To argue for protections for the child stars of family channels, even where those protections override parental authority, this section situates this hidden labor within the history of “private sphere” labor theory and activism. The labor of the children in these videos, obscured behind a smokescreen of “play” that discourages any analysis of their working conditions, places family channels within Marxist feminist discourse on the role of domestic (re)production as an essential part of capitalist structures. This work is also an extreme example of the way capitalism blurs the line between labor and self, increasingly demanding that the workplace take on a central role in the identity of workers; in this case, children, whose lives from leisure activities to developmental milestones are fodder for monetizable content.

 

Previous scholarship has explored digital labor as a new frontier for the exploitation and commodification of new parts of workers’ lives. In his essay “Labor in Informational Capitalism and on the Internet,” Christian Fuchs argues that “the category of the produsage/prosumer commodity,” which is to say, those actors within participatory cultures who both produce and consume, “does not signify a democratization of the media toward a participatory or democratic system, but the total commodification of human creativity” (2010, 192). Ronald Day’s “Value and the Unseen Producers: Wages for Housework in the Women’s Movement in 1970s Italy and the Prosumers of Digital Capitalism” (2015) builds on Fuchs’s work, comparing the uncompensated prosumer to the uncompensated domestic work of women, both forms of unwaged labor producing value for the capitalist that employs the husband or sells the data of the Facebook user. Kidfluencers can be considered through much the same frame: their lives consist of unwaged performance that produces value for their parents and, more to the point, for YouTube, in the attention-driven internet economy.

 

Defenders might argue that the kidfluencer stars of family channels do receive compensation for the monetization of their lives, as their parents are able to provide for them, often in excess. The most popular families live in mansions worth millions of dollars and certainly have the means, in theory, to ensure that their children live comfortably, perhaps even for the rest of their lives. However, returning to Masterson, the lack of Coogan Laws covering children on social media allows their parents to spend this money however they see fit. Coogan Laws, the first of which was passed in 1939, are designed to protect child performers by ensuring that 15% of their earnings are placed in a trust, as well as by codifying issues like education, time off, and hours worked. These protections do not extend to kidfluencers, however, because children involved in social media production are never officially “employed.” Success on YouTube is often fleeting for creators, whether because the algorithm turns against them, they are driven off the platform, viewers lose interest as the children age, or their popularity fades for other reasons. By the time kidfluencers are old enough to advocate for themselves, the money they earned could be gone – or, in cases like Huxley’s, they will no longer even have a legal claim to it. This situation also makes kidfluencers vulnerable to economic abuse, as this compensation-by-proxy can be withheld by their parents for any reason and with no possibility of legal recourse, because their work goes unrecognized as such.

 

Another potential argument against extending protections to kidfluencers is the idea that most videos featuring children posted by these channels represent an authentic documentation of play, rather than a performance one might see in a reality show – that it is not done under duress or against the children’s will but instead merely depicts the children’s natural play behaviors, identical to their offscreen habits. Editing mistakes in select videos can provide a window into the production process, revealing the ways family channel “authenticity” can be fabricated in post-production. In September 2021, family vlogger Jordan Cheyenne deleted her channel after posting a video that contained footage of her coaching her eight-year-old son Christian on how to pose for a thumbnail. The vlog was an announcement to their audience that the family’s puppy had a potentially fatal disease, but after the announcement ends, the clip continues. Cheyenne tells Christian to come closer to her and rest his head on her shoulder, and she then tells him to “act like you’re crying,” directing him to look at the camera and put his hand over his face without blocking his mouth. He repeatedly protests, replying, “No, mom, I’m actually seriously crying,” but she continues to give him orders to capture an image that will get clicks as a video thumbnail. When contacted by reporters, Cheyenne laid bare the dialectical tension of children’s “play” on YouTube. First, she emphasized that how she treated her son is not unique, saying that “people will have their kids ham it up. Behind the scenes they’re like, ‘Do this, and I’ll give you a treat.’” Yet she also expressed the belief that the eight years of content on her channel “wasn’t a facade” (Abrahamson 2021), laying claim to some measure of “authenticity” in what she broadcast even as she acknowledged the way that incentivization can encourage kidfluencers to “ham it up.”

 

The false idea that performance work can be stripped of some of its exploitative power by virtue of being “authentic” behavior has been explored in depth by Heather Berg in her research on the labor conditions of porn actors. Her essays “Porn Work, Feminist Critique, and the Market for Authenticity” and “Sex, Work, Queerly: Identity, Authenticity, and Laboured Performance” explain how the dissolution of the work-life boundary and the increasing demand that workers bring their “true selves” to their job create “a type of emotional and communicative labor and a marketed commodity” (Berg 2017, 671) called “authenticity work.” I argue that, as in porn work, where “producers perceive that authentic scenes bring higher sales” and being seen as not “there only for the check” is considered part of being a good performer (Berg 2017, 673), kidfluencers’ on-camera behavior in family vlogs always requires authenticity work to ensure it will sell to audiences. We can see this in Cheyenne’s direction of Christian to perform his sadness in a way she feels will bring attention and thus value to the video; his genuine grief at his puppy’s illness is not enough, and the image is carefully mediated to create maximum impact. If, as Cheyenne says, this mediation is a common practice for other channels, then it is necessary to view all images from this genre skeptically, with an eye toward the ways their presentation tries to convince the audience that each performance did not require labor to produce.

 

Therefore, although government policy has not caught up to the concentrated forms of capitalism that have manifested online in the information age, it is crucial to include labor in our analysis of the harm of family channel content, even videos that do not explicitly use kidfluencers to sell sponsored products. The presence of the camera and the implicit understanding that children’s performances must convince an audience that the captured footage represents their real lives rob kidfluencers of any separation between work and play; their most personal moments, from the loss of a pet to their first time meeting a new sibling, are transformed into public spectacle meant to generate income they have no legal rights to.

 

Demonetized, Deprioritized

How, then, is YouTube complicit in this exploitation, if parents are the ones managing their children’s labor and demanding authentic performative work? To answer this, we must understand the history of the YouTube algorithm and how it creates a market for family channels to maximize the site’s profits. Since its inception, YouTube has gone from a user-generated file-sharing service to a complex ecosystem of creators who use the platform to build personal brands and launch their online careers. Google’s acquisition in 2006 brought about several changes that facilitated this transformation by making the platform a viable source of income for people in its Partner Program, but the corporate oversight of such a large entity also brought with it an ideological shift toward the interests of capital and thus a new labor relation between YouTube creators and their work, one that is especially troubling when children are the featured players. While not restricted by the need for approval and financial support from Hollywood studios or independently wealthy producers that shapes the production conditions of mainstream video content, YouTube users attempting to make careers as creators on the platform, referred to in this article as “YouTubers,” are nevertheless forced to create in ways that appeal to advertisers. If they do not censor and self-regulate their content to avoid risky topics and even swearing (“Recent Updates to the Advertiser-Friendly Content Guidelines - YouTube Community,” 2021), they may end up demonetized – a process that, despite YouTube’s statements to the contrary, probably impacts how often videos will be recommended by the algorithm, and thus the likelihood that creators will be able to build a following.

 

In the early days of YouTube, “Featured Videos” were manually placed on the landing page by the website’s staff. This allowed for some measure of control over the site’s content; videos that represented the content they wanted to see more of would be displayed prominently, likely inspiring other users to upload similar content. After the acquisition by Google and the launch of the Partner Program in 2007, a system was developed wherein videos were recommended based on “views,” which were registered after the initial click onto the video. Many YouTubers quickly developed forms of low-effort clickbait to game the system, with outrageous or sexualized thumbnails unrelated to the video content to drive clicks on their channels; to combat this, Google pivoted to watch time (on each individual video) and session time (from the time the first video begins to the time the user leaves the site entirely) as the primary metrics for automatic recommendation (Meyerson 2012). Despite this, there remained a “Featured Page” curated by employees, alongside new category pages like “Most Discussed,” all of which created an environment in which brand-new creators had a chance to find a wide audience.
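A toy comparison (the videos and numbers are invented) illustrates why this change of metric mattered: a clickbait upload that wins on raw clicks can lose once total minutes watched becomes the ranking signal.

```python
# Invented videos illustrating the shift from view-count ranking to
# watch-time ranking described above.
videos = [
    {"title": "SHOCKING thumbnail, 30-second payoff", "clicks": 100_000, "avg_minutes_watched": 0.5},
    {"title": "Twenty-minute family vlog", "clicks": 40_000, "avg_minutes_watched": 9.0},
]

by_views = sorted(videos, key=lambda v: v["clicks"], reverse=True)
by_watch_time = sorted(videos, key=lambda v: v["clicks"] * v["avg_minutes_watched"], reverse=True)

print("Ranked by views:     ", [v["title"] for v in by_views])
print("Ranked by watch time:", [v["title"] for v in by_watch_time])
```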

 

This major shift during YouTube’s formative years brought with it a related change. Early videos could not exceed ten minutes, both to discourage the wholesale reupload of copyright-protected content like television shows and to keep down the cost of video hosting and streaming. The limit did not solve the infringement problem, and ongoing litigation with Viacom over a one-billion-dollar suit alleging “brazen” and “massive” copyright violation on the site, filed in 2007 (Viacom International Inc. v. YouTube, Inc. 2010), led to the development of the “state-of-the-art Content ID system, as well as … other powerful tools for copyright owners” (Siegel and Mayle 2010), which manages copyright claims and copyright strikes to this day. When the Content ID system launched in 2008, Viacom no longer sought damages for the 150,000 clips it claimed were previously hosted on YouTube, and, in 2010, Google was confident enough to lift length limits entirely for policy-compliant users. Combined with the new “midroll” ads, which began running in 2009 and which today can only be placed on videos longer than ten minutes, this made a recommendation system based on watch time profitable for the company: more users created longer videos, which could host more midroll ads, which in turn generated more revenue for both YouTubers and the site.
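A back-of-the-envelope sketch of the incentive this paragraph describes follows. Only the ten-minute midroll threshold comes from the text; the assumption of one preroll ad per view, the slot spacing, and the CPM figure are invented for illustration, since actual ad rates are not public.

```python
def estimated_ad_slots(minutes: int, midroll_threshold: int = 10, minutes_per_midroll: int = 4) -> int:
    """One preroll for any monetized video, plus midrolls only when the
    video is longer than the threshold. Spacing is an assumption."""
    slots = 1
    if minutes > midroll_threshold:
        slots += minutes // minutes_per_midroll
    return slots

def estimated_revenue(views: int, minutes: int, cpm_dollars: float = 2.0) -> float:
    """Hypothetical CPM; real per-impression rates are not disclosed."""
    impressions = views * estimated_ad_slots(minutes)
    return impressions / 1000 * cpm_dollars

for length in (8, 25):
    print(f"{length}-minute video, 1M views: ~${estimated_revenue(1_000_000, length):,.0f}")
```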

 

These evolutions set the stage for the introduction of the machine learning recommendation algorithm in 2016 (Geyer et al. 2016), a complicated system that uses the breadth of data collected on an individual user’s habits to offer them the videos most likely to increase their session time and, subsequently, YouTube’s profit margins. The algorithm has proved incredibly effective at its job of leading users to similar, more concentrated versions of the content they already like, and it has effectively kickstarted the move away from direct human oversight of most content on YouTube. However, it would be only a year before the system had to be changed again in the wake of the “Adpocalypse,” a mass exodus of advertisers from the site in 2017 when they realized their brands were being shown alongside extremist content, such as videos promoting terrorism and a video in which popular gaming YouTuber Felix “PewDiePie” Kjellberg paid people on the freelance marketplace Fiverr to hold up a sign that read “Death to all Jews” (Mostrous 2017, Alexander 2018). Terrorism and hate speech were already against the terms of service, but the lack of human oversight meant that many channels that violated these policies remained monetized regardless.
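The cited system is a deep neural network trained on user data; the toy ranker below is only a conceptual sketch of its stated objective, ordering candidate videos by how much additional watch time they are predicted to add to a session. The titles, features, and weights are invented.

```python
# Invented candidates and features; a conceptual stand-in for a
# watch-time-maximizing ranker, not the production model.
candidates = [
    {"title": "10-minute family vlog", "click_probability": 0.9, "expected_minutes_if_clicked": 8.0},
    {"title": "2-minute news clip", "click_probability": 0.7, "expected_minutes_if_clicked": 1.5},
    {"title": "45-minute compilation", "click_probability": 0.5, "expected_minutes_if_clicked": 20.0},
]

def predicted_session_gain(video: dict) -> float:
    # Crude proxy: chance the user clicks, times how long they tend to
    # keep watching if they do.
    return video["click_probability"] * video["expected_minutes_if_clicked"]

for video in sorted(candidates, key=predicted_session_gain, reverse=True):
    print(f"{predicted_session_gain(video):5.2f} expected minutes -> {video['title']}")
```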

 

YouTube promised a “tougher stance on hateful, offensive and derogatory content” (Schindler 2017) to reassure skittish advertisers that the platform was a safe place for their money. Advertisers were given five new content categories – “Tragedy and conflict,” “Sensitive social issues,” “Sexually suggestive content,” “Sensational and shocking,” and “Profanity and rough language” – and the power to withhold their ads from any videos that contained them, as well as three groupings known as “Expanded Inventory,” “Standard Inventory,” and “Limited Inventory” that did the work for advertisers by algorithmically sorting videos by varying levels of “sensitive content” (Kumar 2019, 4). The rules for the Partner Program were also changed so that only users with 1,000 subscribers and 4,000 hours of watch time were eligible (Synek 2018). While the “Discovery and performance FAQs” page provided by YouTube’s support site still claims that YouTube’s “search and recommendation system doesn’t know which videos are monetized and which are not,” experiments by creators show a direct link between a video’s monetization status and its viewership (JoergSprave 2018), suggesting that the ability to appeal to advertisers plays a direct role in algorithmic prioritization.
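The paragraph above describes two gates: eligibility for the Partner Program at all, and an advertiser-facing sort of videos into inventory groupings by “sensitive content.” The sketch below encodes only the published eligibility thresholds; the tier rules are an invented approximation, since YouTube does not publish the actual classification logic.

```python
SENSITIVE_CATEGORIES = {
    "tragedy and conflict",
    "sensitive social issues",
    "sexually suggestive content",
    "sensational and shocking",
    "profanity and rough language",
}

def partner_eligible(subscribers: int, watch_hours: int) -> bool:
    # Thresholds introduced in 2018 (Synek 2018).
    return subscribers >= 1_000 and watch_hours >= 4_000

def eligible_inventories(flagged: set) -> list:
    """Invented rule of thumb: the fewer sensitive categories a video
    trips, the more advertiser inventory pools can serve ads on it."""
    hits = len(flagged & SENSITIVE_CATEGORIES)
    if hits == 0:
        return ["Limited", "Standard", "Expanded"]
    if hits == 1:
        return ["Standard", "Expanded"]
    return ["Expanded"]

print(partner_eligible(subscribers=1_200, watch_hours=4_500))   # True
print(eligible_inventories(set()))                              # all three pools
print(eligible_inventories({"profanity and rough language", "sensational and shocking"}))  # "Expanded" only
```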

 

The new tools created in the wake of the Adpocalypse also made stark how algorithmic regulation through monetization status can do the work of further repressing marginalized people. LGBT YouTubers noticed that when the “family friendly” Restricted Mode was turned on, their content related to gay or trans identity and community was filtered out. YouTube’s vice president for product management at the time, Johanna Wright, said in response that “the bottom line is that this feature isn’t working the way it should” (Chokshi 2017). This returns us to one of the fundamental problems with black box algorithms: those who run them may be unable to anticipate bigoted algorithmic output that will materially harm members of oppressed classes living under their governance. This assertion does not shift blame off YouTube for the output of their algorithm. It instead points to the ways that reliance on computerized systems perceived to be neutral allows corporations like YouTube to shift blame off themselves and onto a “malfunction” in these supposedly neutral systems. Treating the algorithm as an apolitical force in content moderation makes it impossible to critique how “authority is increasingly expressed algorithmically” (Pasquale 2015, 8).

 

Before Google could even complete the implementation of these changes, vlogger Logan Paul posted a video containing footage of a corpse he had filmed in Japan’s Aokigahara Forest, a site widely associated with suicide. Like Kjellberg, Paul was quickly removed from Google Preferred, a program launched in 2014 to match marketers with top-performing channels (Wallenstein 2014), and it was announced that all videos in the program would be manually reviewed (Alvarez 2018). Google Preferred was replaced with YouTube Select in 2020 (Sharma 2020), which Google claims has even more advanced controls to ensure brand suitability. The immediate fallout of these events saw a massive dip in revenue for creators across the board, and the long-term impact has been that “genres such as news, commentary, political shows and comedy seem particularly vulnerable to the loss of revenue due to the risk of partial or full demonetization” (Kumar 2019, 9), driving some creators off the platform and leading others to alternative revenue streams like Patreon and OnlyFans, where viewers can offer monthly donations to support creators’ work. While both platforms have faced their own criticisms for their business practices, many YouTubers with moderate rather than breakout success find them a more stable means of making a living.

 

The crackdowns of 2017 came along with a promise to deprioritize “borderline content,” which does not violate terms of service but still contains rhetoric that could lead someone towards supporting harmful ideologies such as white supremacy. The company’s focus on keeping users watching and the unequal regulation of content suggested by the algorithm led to concerned employees, journalists, and viewers sounding the alarm about its tendency to funnel people looking for information into a rabbit hole of extremist content and conspiracy theories, with significant evidence to back up these concerns (Evans 2018, Evans 2019, Ribeiro et al. 2019). A concurrent idea called “Project Bean,” meant to resolve the dissatisfaction of creators following the Adpocalypse by creating a special algorithm that “would pool incoming cash, then divvy it out to creators, even if no ads ran on their videos” (Bergen 2019), was shelved for fear that it would worsen the problem of right-wing radicalization on YouTube.
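The quoted description of Project Bean – pool incoming ad money, then divide it among creators “even if no ads ran on their videos” – amounts to a proportional split of a shared pot. The sketch below assumes watch time would have been the dividing key; Bloomberg’s report does not specify the formula, and the project was never built.

```python
def pooled_payout(total_ad_revenue: float, watch_minutes_by_channel: dict) -> dict:
    """Split a shared revenue pool in proportion to each channel's share
    of watch time. The proportional key is an assumption; the shelved
    proposal's exact mechanics were never published."""
    total_minutes = sum(watch_minutes_by_channel.values())
    return {
        channel: round(total_ad_revenue * minutes / total_minutes, 2)
        for channel, minutes in watch_minutes_by_channel.items()
    }

# Invented pool size and channels.
print(pooled_payout(1_000_000, {
    "family_vlog": 600_000,       # ad-friendly, fully monetized
    "news_commentary": 300_000,   # largely demonetized
    "small_creator": 100_000,
}))
```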

 

As criticism continued, YouTube claimed that it was revamping its efforts in 2019. It announced that it would be “strengthening enforcement” of its existing Partner Program policies regarding hate speech (The YouTube Team 2019). This could be seen as an unintended admission of the unequal treatment that many YouTubers and viewers had been pointing out for years: the promise to “strengthen enforcement” reveals that the existing policies had not actually been used to remove all content found in violation. CEO Susan Wojcicki claimed that the new human moderators tasked with delineating borderline content would consider the context of statements, but she also seemed to leave plenty of loopholes:

Susan Wojcicki: So, if you’re saying, “Don’t hire somebody because of their race,” that’s discrimination. And so that would
be an example of something that would be a violation against our policies.

Lesley Stahl: But if you just said, “White people are superior” by itself, that’s okay.

Susan Wojcicki: And nothing else, yes. (Stahl 2019)

When such allowances are made for blatantly white supremacist statements, it becomes nearly impossible for the YouTube community to report and deprioritize statements that function as dog whistles. For example, although the channel of white nationalist Stefan Molyneux was banned from the site in 2020 alongside the channels of Nazi Richard Spencer and former Grand Wizard of the Ku Klux Klan David Duke, Molyneux’s interview on the Rubin Report, in which he argues for scientific racism regarding IQ disparities, has nearly 1.4 million views at the time of writing (The Rubin Report 2017). With historical context, it is obvious that his words are not a neutral statement of fact regarding IQ scores in the United States, but rather are intended to imply that these disparities are rooted in genetic differences. Moreover, the site had already identified him as a producer of borderline content when his personal channel was deleted. Neither of these facts has resulted in any response from the site regarding the Rubin Report interview.

 

Examining such cases makes it obvious that many channels perceived to be an entrance to the white supremacist rabbit hole remain running and monetized, even as opposing creators who explain their dog whistles are demonetized for discussing borderline topics in a critical light, calling into question Wojcicki’s statement that context matters. An example on the other side of this issue is the case of a history teacher who was banned from the platform for hate speech “due to clips on Nazi policy featuring propaganda speeches by Nazi leaders” (@MrAllsopHistory 2019), losing 15 years of educational resources posted to his channel. The overhaul also ostensibly prevented anything marked as borderline content from appearing on a user’s homepage or list of recommended videos, further marginalizing anything that the algorithm did not approve of and boosting things that would maximize YouTube’s cut of the advertising revenue.

 

The final major intervention that has caused strife in the YouTube community was the 2020 change in the treatment of content targeted at children. On September 3, 2019, the Federal Trade Commission reported that “Google LLC and its subsidiary YouTube, LLC will pay a record $170 million to settle allegations by the Federal Trade Commission and the New York Attorney General that the YouTube video sharing service illegally collected personal information from children without their parents’ consent” (FTC 2019). YouTube then passed the responsibility for this violation of the Children’s Online Privacy Protection Act (COPPA) to YouTubers who create content for children, removing comments, notifications, and the ability to serve targeted ads from any channel or video that the site designates as directed at children. The removal of notifications and comments has left many YouTubers unsure of their future, given the importance of engagement to the algorithm’s promotion of their work, especially in the hours just after it is posted. As Melissa Hunter, the CEO of Family Video Network, points out:

What’s left for people to advertise on? Because now you can’t advertise on content aimed at kids. They’re already not monetizing a bunch of other content. They’re already not running ads on channels where people curse a lot, channels where people talk about mental health issues. Videos about LGBTQ issues aren’t advertiser friendly. What do people make? (Alexander 2019b)

This anxiety around the unclear guidelines for what will be demonetized creates a climate of fear on YouTube, where many creators preemptively self-censor their content to protect their income, even before it is reviewed, while those who produce hateful content that makes other users feel unsafe continue to post unimpeded.
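A minimal sketch of the “made for kids” designation described above, under the assumption that it can be modeled as a set of feature flags: once a channel or video is designated as directed at children, comments, notifications, and personalized advertising are switched off. The data structure and function names are invented; only the list of disabled features comes from the text.

```python
from dataclasses import dataclass

@dataclass
class VideoFeatures:
    comments_enabled: bool = True
    notifications_enabled: bool = True
    personalized_ads: bool = True

def apply_made_for_kids_designation(features: VideoFeatures) -> VideoFeatures:
    """Hypothetical helper mirroring the policy described in the text:
    content directed at children loses comments, notifications, and
    targeted (personalized) advertising."""
    features.comments_enabled = False
    features.notifications_enabled = False
    features.personalized_ads = False
    return features

print(apply_made_for_kids_designation(VideoFeatures()))
```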

 

In such a production climate, it makes sense that a seemingly uncontroversial genre like the family vlog would thrive; such videos are unlikely to be demonetized for controversial or inappropriate content, nor do they fall under the heading of content made specifically for children, which would bring them under greater scrutiny and restrict their income. They also escape regulation under community guidelines regarding child safety because their content masquerades as authentic play behavior, as shown in the previous section; while a few channels have been removed by YouTube, such as 2017’s 68th most popular channel, “Toy Freaks” (Perez 2017), the site has no incentive to recognize the majority of family channel content as the exploitative child labor that it is. On the contrary, the evidence suggests that the algorithm continues to promote family channels because of the value they bring to the site: long run times and “family friendly” content make these videos a safe choice to promote in order to maximize advertising revenue.

 

Conclusion

This article has brought attention to an undertheorized form of unwaged domestic labor and its exploitation: that of kidfluencers and the authenticity work that drives their success. It has also explained how YouTube’s regulatory mechanisms, increasingly based on machine learning, create a production environment that encourages and rewards the use of uncompensated child labor. As neoliberal capitalism continues to break down work-life boundaries, turn hobbies and leisure time into new opportunities for (corporate) profit, and shape the digital landscape to feed its algorithms with valuable data about the prosumers that populate it, the case of kidfluencers serves as a reminder of how people in capitalist societies are commodified, and encouraged to commodify themselves, to benefit the bottom line of technology corporations in ways unrecognizable as traditional waged labor. The analysis is also a call for a reevaluation of the rights of parents to be the sole decision makers for their children. I do not propose that any one solution can resolve this issue, but it is clear that two conditions contribute to the problem of labor exploitation in family vlogging: the minimal rights of the children who star in this content, a consequence of the U.S. presumption that parents may raise their children as they see fit (with the obvious exception of abuse), and the lack of oversight of the more subtly harmful and abusive practices born from publicizing the lives of children online.

 

It is also a call for further research scrutinizing the role of black box algorithms in labor exploitation online and in the gig economy. While this discussion focuses on the kidfluencers of YouTube because of the unique precarity of their position, workers across the gig economy are subject to a set of economic harms specific to this category of labor due to the role of algorithms in controlling their work, and the black box makes them unable to access the facts that could protect them (Muller 2020). As the deindustrialization of the U.S. workforce continues, new modes of labor activism will be necessary in response. The protracted struggle between those who own the means of production and those who must sell their labor to them has long since entered the digital era. Moreover, the gutting of unions by decades of propaganda has left workers even more vulnerable than before. What rises to confront the algorithmic alienation of the contemporary proletariat has yet to fully manifest, but it is crucial for the safety of millions that something does.

References

Abrahamson, Rachel Paula. 2021. “YouTuber Jordan Cheyenne Deletes Account after Video Coaching Son to Cry.” TODAY.Com. September 14, 2021. https://www.today.com/parents/jordan-cheyenne-speaks-out-about-youtube-video-son-crying-t231055.

Alexander, Julia. 2018. “The Yellow $: A Comprehensive History of Demonetization and YouTube’s War with Creators.” Polygon. May 10, 2018. https://www.polygon.com/2018/5/10/17268102/youtube-demonetization-pewdiepie-logan-paul-casey-neistat-philip-defranco.

 

———. 2019a. “YouTubers Say Kids’ Content Changes Could Ruin Careers.” The Verge. September 5, 2019. https://www.theverge.com/2019/9/5/20849752/youtube-creators-ftc-fine-settlement-family-friendly-content-gaming-minecraft-roblox.

———. 2019b. “YouTube Claims Its Crackdown on Borderline Content Is Actually Working.” The Verge. December 3, 2019. https://www.theverge.com/2019/12/3/20992018/youtube-borderline-content-recommendation-algorithm-news-authoritative-sources.

 

Alvarez, Edgar. 2018. “Logan Paul Forced YouTube to Admit Humans Are Better than Algorithms.” Engadget. January 19, 2018. https://www.engadget.com/2018-01-19-logan-paul-youtube-algorithm-changes.html.

Berg, Heather. 2015. “Sex, Work, Queerly: Identity, Authenticity, and Laboured Performance.” In Queer Sex Work, ed. Mary Laing, Katy Pilcher, and Nicola Smith, 23–32. London: Routledge.

———. 2017. “Porn Work, Feminist Critique, and the Market for Authenticity.” Signs: Journal of Women in Culture and Society 42 (3): 669–92. https://doi.org/10.1086/689633.

Bergen, Mark. 2019. “YouTube Executives Ignored Warnings, Letting Toxic Videos Run Rampant.” Bloomberg.com. April 2, 2019. https://www.bloomberg.com/news/features/2019-04-02/youtube-executives-ignored-warnings-letting-toxic-videos-run-rampant.

 

Boyle, David. 2012. “Upcoming Improvements to Accuracy of Subscriber Counts.” Blog. YouTube. January 10, 2012. https://blog.youtube/news-and-events/upcoming-improvements-to-accuracy-of/.

Cannon, Jack, director. 2007. Kid Nation, CBS.

Caplan, Robyn, and Tarleton Gillespie. 2020. “Tiered Governance and Demonetization: The Shifting Terms of Labor and Compensation in the Platform Economy.” Social Media + Society 6 (2): 2056305120936636. https://doi.org/10.1177/2056305120936636.

Chokshi, Niraj. 2017. “YouTube Filtering Draws Ire of Gay and Transgender Creators.” The New York Times, March 20, 2017, sec. Technology. https://www.nytimes.com/2017/03/20/technology/youtube-lgbt-videos.html.

Coffee Break. 2019. What 40,000 Videos Tell Us About The Trending Tab. https://www.youtube.com/watch?v=fDqBeXJ8Zx8.

Day, Ronald E. 2015. “Value and the Unseen Producers: Wages for Housework in the Women’s Movement in 1970s Italy and the Prosumers of Digital Capitalism.” The Information Society 31 (1): 36–43. https://doi.org/10.1080/01972243.2015.977632.

“Discovery and performance FAQs - YouTube Help.” n.d. Accessed November 13, 2021.  https://support.google.com/youtube/answer/141805?hl=en&visit_id=637724544922262323-37199717&rd=1#zippy=%2Cdoes-monetization-status-yellow-icon-impact-my-video-discovery

Evans, Robert. 2019. “How YouTube Became a Perpetual Nazi Machine.” Behind the Bastards. https://www.iheart.com/podcast/105-behind-the-bastards-29236323/episode/how-youtube-became-a-perpetual-nazi-46259210/.

———. 2018. “From Memes to Infowars: How 75 Fascist Activists Were ‘Red-Pilled.’” bellingcat. https://www.bellingcat.com/news/americas/2018/10/11/memes-infowars-75-fascist-activists-red-pilled/.

Fuchs, Christian. 2010. “Labor in Informational Capitalism and on the Internet.” The Information Society 26 (3): 179–96. https://doi.org/10.1080/01972241003712215.

Geyer, Werner, Paul Covington, Jay Adams, and Emre Sargin. 2016. “Deep Neural Networks for YouTube Recommendations.” RecSys '16: Proceedings of the 10th ACM Conference on Recommender Systems, 191–98. ACM. https://doi.org/10.1145/2959100.2959190.

“Google and YouTube Will Pay Record $170 Million for Alleged Violations of Children’s Privacy Law.” 2019. Federal Trade Commission. https://www.ftc.gov/news-events/press-releases/2019/09/google-youtube-will-pay-record-170-million-alleged-violations.

Greenberg, Adam P. 2009. “Reality’s Kids: Are Children Who Participate on Reality Television Shows Covered under the Fair Labor Standards Act?” Southern California Law Review 82 (3): 595–648.

JoergSprave. 2018. Debunked: YouTube Caught Lying! (YouTubers Union Video). https://www.youtube.com/watch?v=Tn5rOOfW7bc.

Kumar, Sangeet. 2019. “The Algorithmic Dance: YouTube’s Adpocalypse and the Gatekeeping of Cultural Content on Digital Platforms.” Internet Policy Review 8 (2). https://doi.org/10.14763/2019.2.1417.

Masterson, Marina A. 2021. “When Play Becomes Work: Child Labor Laws in the Era of ‘Kidfluencers.’” University of Pennsylvania Law Review 169 (2): 577–607.

Meyerson, Eric. 2012. “YouTube Now: Why We Focus on Watch Time.” Blog. YouTube. https://blog.youtube/news-and-events/youtube-now-why-we-focus-on-watch-time/.

Mostrous, Alexi. 2017. “Google Faces Questions over Videos on YouTube.” The Times. https://www.thetimes.co.uk/article/google-faces-questions-over-videos-on-youtube-3km257v8d.

Mr Allsop History. 2019. “YouTube Have Banned Me for ‘Hate Speech’, I Think Due to Clips on Nazi Policy Featuring Propaganda Speeches by Nazi Leaders. I’m Devastated to Have This Claim Levelled against Me, and Frustrated 15yrs of Materials for #HistoryTeacher Community Have Ended so Abruptly. @TeamYouTube.” Tweet. @MrAllsopHistory (blog). June 5, 2019. https://twitter.com/MrAllsopHistory/status/1136326031290376193.

Muller, Zane. 2020. “Algorithmic Harms to Workers in the Platform Economy: The Case of Uber.” Columbia Journal of Law and Social Problems 53: 44.

NikkiPhillippi. 2018. We’re Not Adopting from Thailand Anymore. https://www.youtube.com/watch?v=wYUw3Hq8vNg.

Pasquale, Frank. 2015. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge: Harvard University Press.

Perez, Sarah. 2017. “YouTube Terminates Exploitive ‘Kids’ Channel ToyFreaks, Says It’s Tightening Its Child Endangerment Policies.” TechCrunch (blog). November 17, 2017. https://social.techcrunch.com/2017/11/17/youtube-terminates-exploitive-kids-channel-toyfreaks-among-broader-tightening-of-its-endangerment-policies/.

“Recent Updates to the Advertiser-Friendly Content Guidelines - YouTube Community.” 2021. https://support.google.com/youtube/thread/64073546/recent-updates-to-the-advertiser-friendly-content-guidelines?hl=en&msgid=104557807.

Ribeiro, Manoel Horta, Raphael Ottoni, Robert West, Virgílio A. F. Almeida, and Wagner Meira. 2019. “Auditing Radicalization Pathways on YouTube.” ArXiv:1908.08313 [Cs], December. http://arxiv.org/abs/1908.08313.

The Rubin Report. 2017. Stefan Molyneux on Controversies (Pt. 2). https://www.youtube.com/watch?v=T0KKc6GbeNo.

Siegel, Joshua and Doug Mayle. 2010. “Up, Up and Away - Long Videos for More Users.” Blog. YouTube. https://blog.youtube/news-and-events/up-up-and-away-long-videos-for-more/.

Sharma, Vishal. 2020. “Make the Best of YouTube Yours with YouTube Select.” Google. https://blog.google/products/ads/introducing-youtube-select/.

Stahl, Lesley. 2019. “How Does YouTube Handle the Site’s Misinformation, Conspiracy Theories and Hate?” CBS News, December 1, 2019. https://www.cbsnews.com/news/is-youtube-doing-enough-to-fight-hate-speech-and-conspiracy-theories-60-minutes-2019-12-01/.

 

Synek, Greg. 2018. “YouTube Raises Activity Requirements for Partner Program Monetization.” TechSpot. https://www.techspot.com/news/72792-youtube-raises-activity-requirements-partner-program-monetization.html.

Viacom International Inc. v. YouTube, Inc. 2010. United States District Court for the Southern District of New York.

Wallenstein, Andrew. 2014. “YouTube Unveils Google Preferred at NewFronts Event.” Variety (blog). April 30, 2014. https://variety.com/2014/digital/news/youtube-unveils-google-preferred-at-newfronts-event-1201168888/.

Wojcicki, Susan. 2020. “YouTube at 15: My Personal Journey and the Road Ahead.” Blog. YouTube. https://blog.youtube/news-and-events/youtube-at-15-my-personal-journey/.

YouTube. 2006. A Message From Chad and Steve. https://www.youtube.com/watch?v=QCVxQ_3Ejkg.

———. 2010. YouTube Answers: Ads & Advertising. https://www.youtube.com/watch?v=jF--uLxtYlo&t=88s.

 

“YouTube Partner Program Overview & Eligibility - YouTube Help.” n.d. Accessed April 26, 2021. https://support.google.com/youtube/answer/72851?hl=en.

The YouTube Team. 2019. “Our Ongoing Work to Tackle Hate.” Blog. YouTube. https://blog.youtube/news-and-events/our-ongoing-work-to-tackle-hate/.
