Cindy Krum discusses Google’s manipulation of search and data collection through Chrome, revealing how this affects SEO and user privacy.
Highlights
📈 Google uses Chrome data for search ranking algorithms.
🔎 Evidence shows Google misled about mobile-first indexing.
💻 Chrome’s local processing aids Google in data collection.
🕵️♀️ Users unknowingly contribute to Google’s AI training.
🚫 Google bypasses privacy by ignoring user settings.
🌐 Chrome’s updates correlate with algorithm changes.
⚖️ Potential legal implications of data misuse are raised.
Key Insights
🔍 Manipulation of Search: Google’s practice of using engagement data from Chrome to influence search rankings calls into question the integrity of search results, raising concerns about transparency and fairness in digital marketing.
🚀 Mobile-First Indexing Complexity: The confusion surrounding mobile-first indexing reveals Google’s evolving strategies, indicating a shift towards data-driven methodologies that prioritize user behavior, complicating SEO practices.
⚙️ Local Processing Utilization: By leveraging users’ devices for data processing, Google reduces its own operational costs, raising ethical concerns about consent and user awareness in data usage.
🧠 AI Training Dependence: Google’s reliance on user data to train AI models shows a growing interdependence between user behavior and the effectiveness of advertising, potentially compromising user privacy.
🔒 Privacy Erosion: The trend of Google ignoring privacy settings suggests a proactive approach to data collection that could violate user trust and regulatory guidelines, prompting calls for stricter privacy protections.
📊 Correlating Updates: The increase in Chrome updates correlating with algorithm changes hints at a strategic approach where Google continuously adapts to enhance its data collection capabilities.
⚖️ Legal and Ethical Challenges: These practices could lead to significant legal challenges, especially in regions with stringent privacy laws, as they may be seen as monopolistic and invasive, potentially inviting regulatory scrutiny.
Transcript:
00:00:02 Hi everybody, my name is Cindy Krum, and I'm the CEO of a company called MobileMoxie, based in Denver, Colorado. We do mobile SEO and ASO consulting and have a mobile SEO and ASO toolset to go with it. This is a talk I originally presented last week at the G50 Summit in Austria, called "Monopolies and Manipulation: How Google Uses Chrome to Monopolize Search." I tied for first place with Eric Woo, who was talking about the future of AI, so I'm excited to present it to you today. Let's get into it.
00:00:36 How complicit are you willing to be in a crime or a cover-up? Today I'm going to tell you a story that's going to seem unbelievable. It is going to sound like a conspiracy theory, but it is all true. It is a story about how Google manipulated the entire world, and especially the leaders of the SEO industry, to support their illegal monopoly. It is a monumental shift in our understanding of how Google works, and it is something that's been hiding in plain sight, that everyone has missed until now.
00:01:12 So let's start at the beginning, which is to say, let's start by understanding that Google is not an accurate narrator of what's been going on. This might be overstating the obvious, but everything that's come out of the DOJ testimony has shown that Google is hiding and destroying evidence. Judge Donato said he had never seen anything so egregious, after viewing disturbing evidence that Google's auto-erase feature had deleted reams of key employee chat logs. Later he said: "This conduct is a frontal assault on the fair administration of justice. It undercuts due process. It calls into question the just resolution of legal disputes. It is antithetical to our system."
00:01:49 So this can't be overstated: Google has been lying, not just to us, but to the judges in the cases adjudicating their actions. They have been lying, and they have been covering things up. Now, what we know in our industry, from some of the DOJ findings and other sources, is that Google is using clicks and engagement in their algorithm, something they denied, or avoided talking about honestly, for many years.
00:02:29 That became visible and was proven recently with the leak (not that leak, this leak) that was identified by a number of people working together: Rand Fishkin, Mike King, and Dan Petrovic. It showed that Google is using a number of signals, or at least was at the time, to evaluate behavior and use it as part of their algorithm. In fact, DOJ evidence showed that it was one part of a three-part model, so a full third of the algorithm. For years Google said they didn't use clicks, or didn't use clicks from search results, as a ranking factor. But in fact they do use it; they call it a proxy for engagement, and in that sense it is a ranking factor.
00:03:10 All of this became evident when the leak came out, but it's actually been visible in Chrome histograms on any computer where Chrome is installed. Just type in chrome://histograms and you'll see your own tracking: all of the tabs you've got open on your computer, and where you clicked.
00:03:46 This screenshot shows a click on an anchor element. It shows the distance from previous clicks, it shows the X and Y location, and whether the device is in landscape or portrait; you can see this is happening in a mobile version of Chrome. This one is a desktop version of Chrome, showing histograms about client ads and page-load percentages, and at the bottom, navigation timing related to Google Search: Google Search navigation timing and page load.
00:04:22 So it's showing how people are behaving, and tracking that behavior after they click through from a search result. And if you think this is only because I'm logged in, you're wrong; it also shows up in incognito mode, all the way down to showing whether you filled out a credit card form with an autofilled or stored credit card, or filled out regular forms. It's capturing a lot of information: not just clicks and interaction, but how quickly the page loaded, how far the user scrolled, and what they engaged with.
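To make the kind of engagement telemetry described above concrete, here is a minimal sketch, entirely my own illustration and not Google's code, of how an in-browser script could capture click coordinates, scroll depth, and timing and beacon them to a collection endpoint (the endpoint and payload shape are invented):

```ts
// Hypothetical sketch of click/scroll telemetry of the kind described above.
// The endpoint and payload are invented for illustration; this is not
// Chrome's actual instrumentation, just the general shape of such collection.
interface EngagementEvent {
  type: 'click' | 'scroll';
  x?: number;                // click X coordinate
  y?: number;                // click Y coordinate
  scrollDepth?: number;      // fraction of the page scrolled so far
  sincePageLoadMs: number;   // timing relative to navigation start
}

function report(event: EngagementEvent): void {
  // sendBeacon survives page unloads, so events are not lost on navigation
  navigator.sendBeacon('https://example.com/collect', JSON.stringify(event));
}

document.addEventListener('click', (e: MouseEvent) => {
  report({ type: 'click', x: e.clientX, y: e.clientY, sincePageLoadMs: performance.now() });
});

document.addEventListener('scroll', () => {
  const max = document.documentElement.scrollHeight - window.innerHeight;
  report({
    type: 'scroll',
    scrollDepth: max > 0 ? window.scrollY / max : 1,
    sincePageLoadMs: performance.now(),
  });
}, { passive: true });
```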
00:04:59 So Google is not just tracking a little bit of click data from search results; they're tracking a lot of this data and passing it back to their algorithm. Now, the 2018 change to mobile-first indexing is a bigger deal than people realize, and this is where things get interesting, because we have to think back to when Google started talking about mobile-first indexing and how they announced it. They bungled the launch, because at first they talked about it as indexing, calling it mobile-first
00:05:37 indexing, but then when they were describing it, they talked about crawling, and how the new crawler was a mobile-only or mobile-first crawler. And then after it launched, they talked about it in terms of rendering: how it was about JavaScript, and how the previous crawler wasn't good at rendering all of the JavaScript but this crawler was. As SEOs, we know that historically crawling, indexing, and ranking (and, added later, rendering) are all separate processes. So why did Google
00:06:12 talk about these things all together? Why did it get so confused and conflated? You might say, "Well Cindy, who cares? Google messes up launches all the time; search engines bungle communication all the time, especially Google." But this is where the real story begins. We know that Google botched the launch of mobile-first indexing. Until mobile-first indexing, Google had never rendered JavaScript when crawling. They didn't want to; they thought it was a bad idea for their
00:06:49 crawler: it was a security risk, it was slow, it was resource-intensive and expensive, so they just never wanted to do it. That begs the question: why, with mobile-first indexing, were they suddenly able to render JavaScript, and to what extent were they doing it? Two very smart guys researched this. The one on the left is Tom Anthony, and he found in his research that Google's mobile-first bot was only executing JavaScript 2% of the time. Now, that seems a bit low. The guy on
00:07:27 the right is an ex-Googler, until recently working for Google and then leaving for a company called Vercel, where he is their performance guru; his name is Malte Ubl. He says, in his new role, that they researched it and found that Google's mobile crawler is executing JavaScript 100% of the time. That's a big difference: 98 percentage points between Tom Anthony's research and Malte Ubl's. Why is it so big? Let's dig in. Here's the article by Malte Ubl, where he says that 100% of HTML pages resulted in a full page render, including pages with complex interactions.
00:08:08 It just seemed like too much to me: 2% seemed too low, and 100% seemed wasteful. So I reached out to Malte on Twitter, and I asked him, "Hey, I read your article. Can you tell me more about the methodology? Were you just looking at new pages? Were you just looking at short pages? Was there anything special about the methodology that might have changed your results?" And he responded: "Check my LinkedIn. Not only am I 100% confident in
00:08:44 the outcome of the research, I happen to have known the answer before it started." So how did he know before it started? Well, he helped build the system. But how is that possible? If Googlebot Mobile is executing JavaScript 100% of the time, we know that would take too many resources. It would burn up servers, not only at Google, to crawl and execute all the JavaScript they see on the web; it would burn up our own servers as businesses, if Google were crawling the entirety of a site, or even a majority
00:09:24 of it, all the deep pages, every single day. That would be resource-intensive enough that businesses would complain and say, "Hey, you don't need to come this often. You don't need to use our servers to request and render all that JavaScript just for you to process it." That is so resource-intensive it would burn cash, and with all the energy it took, it would burn up the planet. It would be irresponsible of Google to crawl that way.
00:10:03 But what we know about mobile-first indexing is this: after it launched, Google explained it as a two-phase rendering process, where they would crawl and index first with a fairly normal HTML crawler that wasn't really executing much JavaScript, and then a second wave would come back and render the JavaScript so it could be processed. At the time, we never questioned the part of their diagram that says "as a rendering resource becomes available," but I think this is where the difference comes in. Tom Anthony was checking the
00:10:44 first phase of mobile-first indexing and seeing that the mobile-first indexing bot hardly ever executed JavaScript: 2%. What Malte was looking at was the second phase. And what we know, and Malte probably also knows, is that 99.9% of Chrome users on mobile and desktop have JavaScript enabled. So what I believe is happening is that Google failed to tell us, back in 2018 when it launched, that what they were using for the second phase of indexing
00:11:27 was not a bot per se; it was our own computers in our homes: your Chrome, used "as a rendering resource becomes available." That means when you, the user, requested the site and executed the JavaScript, they would go fetch that from your computer. They wouldn't use their bot to render it; they would wait until a user rendered the page for them, and then just capture that full page render so they could process it.
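As an illustration of the "browser as rendering resource" model being described here, consider how little code it takes to capture a fully rendered page from inside the browser once the user's machine has executed the JavaScript. This is a hypothetical sketch, not code from Chrome; the endpoint and the settle delay are invented:

```ts
// Hypothetical sketch of the "browser as rendering resource" idea: once the
// user's browser has executed the page's JavaScript, the fully rendered DOM
// can be serialized and shipped off for indexing-style processing.
// Endpoint and timing are invented for illustration.
window.addEventListener('load', () => {
  // Give post-load scripts a moment to settle, then snapshot the DOM.
  setTimeout(() => {
    const renderedHtml = document.documentElement.outerHTML;
    navigator.sendBeacon(
      'https://example.com/rendered-snapshot',
      new Blob([renderedHtml], { type: 'text/html' })
    );
  }, 3000);
});
```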
00:12:09 And this isn't a wild or new model; it's something companies all over use, though they use it a little differently. Think of SETI@home, which was used for space research, or Bitcoin mining, or protein folding. All of these let you opt in your computer when it's not being used: your compute power, your downtime, your processing power on your own devices. You say, "When I'm not using it, you can use it." In those cases you knowingly opt in, and you get some benefit: you're helping with space
00:12:47 exploration, you're getting actual currency back from Bitcoin mining, or you're helping with health research. You at least get a warm fuzzy. But here, with Google using our own computers to pre-process information for indexing, and our own browsers to capture information and rendering, we haven't necessarily opted in, and we're not knowingly getting anything back. So this is what it looks like: we now know it's not just about what came out in the leak, that Google is using our click data; Google is also using our rendering data
00:13:28 and our behavior to build models: cohort models, topic models, history and engagement models. They're taking all of this from our local computers without permission and passing it up to their processors. It's pre-processed locally so it can be batched and sent up, then passed to their algorithms to be further processed and evaluated. That's how they're able to get the rankings they do, but it's also how they're able to understand things like demographic cohorts and journeys:
00:14:06 where you shop, how you make decisions, and how to model that, so they can use the data in their advertising models, in PMax, in PPC campaigns. They're using our own behavior to market to us, and to train the AI that serves ads to do a better job. And we know this is going to be successful, because Google is the only search engine with anything close to Chrome's market share; Chrome has 65.18% market share around the world. This is what's helping entrench their moat, and it's preventing other potential competitors in the search engine space from ever having enough data to compete.
00:14:51 They have the browser, they get the data from it, and they turn around and use it to power their ad models, their search engine, their topic understanding, and their AI. But now we're getting ahead of ourselves. Maybe you don't believe me; maybe you don't believe that mobile-first indexing is using Chrome data and Chrome rendering. So let's go to Ben Gomes, who, in a 2018 interview with Fast Company by Harry McCracken,
00:15:26 framed Google's challenge as taking "the PageRank algorithm from one machine to a whole bunch of machines, and they weren't very good machines at the time." Do you think he was talking about Google's technology when he said they weren't very good machines? No, he was talking about our home computers. And this is why, for instance, you may have noticed an uptick in Chrome updates. Chrome now updates, it feels like, sometimes twice a week. This is the graph of the number of major updates
00:16:01 Chrome has launched, and it doesn't even include the minor updates within an update, which we know have been increasing, to what sometimes seems like an excessive rate. These major updates do, to some degree, correlate with algorithm updates. That's happening because Google needs the model they're using on your computer, the one capturing the data, to match what they're using in the cloud to further process the data. This makes so many things make sense.
00:16:34 It also explains why, after mobile-first indexing launched, a bunch of other things launched. All of a sudden we got this thing called Core Web Vitals. We had to have a new Search Console; the old Search Console couldn't be updated, we needed a whole new one. Google all of a sudden stopped worrying about cloaking, something Google used to worry about a lot, actively telling SEOs that you can't show the bot one thing and users another. They stopped messaging on
00:17:06 that because they weren't worried about it anymore: they were seeing exactly what users were seeing. It also explains why Google changed their understanding of robots.txt. It used to be that you had one robots.txt file, and Google would check it once, when it began crawling a website, and use those rules across the entirety of the crawl. Now that robots.txt file has become just a suggestion, and Google insists that you put your robots instructions on
00:17:37 every single page. Why do we think that changed? Because Google isn't crawling the way they used to, like water flowing through the website along links; it's grabbing pages one at a time, rendered from local computers.
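If crawling really is page-by-page rather than a single site-wide pass, then per-page robots directives matter more than one robots.txt file checked once per crawl. A minimal sketch of what per-page instructions look like, here via the standard X-Robots-Tag response header on an assumed Express server (the routes are invented):

```ts
// Sketch: per-page robots directives via the standard X-Robots-Tag header
// (equivalent to a <meta name="robots"> tag in each page's <head>), rather
// than relying on robots.txt alone. Express server and routes are assumed.
import express from 'express';

const app = express();

app.get('/private/:id', (req, res) => {
  res.set('X-Robots-Tag', 'noindex, nofollow'); // applies to this response only
  res.send('<h1>Members-only content</h1>');
});

app.get('/', (_req, res) => {
  res.send('<h1>Public home page</h1>'); // no directive: indexable
});

app.listen(3000);
```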
00:18:14 So let's think about Core Web Vitals. What was the big deal about Core Web Vitals, and how did they message it when it came out? The big deal when Google launched Core Web Vitals was that it combined what they called synthetic data and field data. All of Google's tools had previously worked on synthetic data: fake tests, run in effect in its own head, to understand what a website would look like. But field data was real user data, what they call real user metrics, or RUM. And they told us, in a no-big-deal kind of way, that Core Web Vitals was capturing real user loading experiences, putting them in a big database called CrUX (the Chrome User Experience Report), and reporting back to us not just on how the synthetic page load worked, but on how real users were experiencing the page.
00:18:54 And we kind of took it in stride and said, "Okay, that's great." But no one really stopped to ask, "Hey, how often are you using real user metrics, real user data? And when you say real users, you mean from Chrome, right?" So that was a big deal, and we didn't really even notice it.
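The CrUX field data mentioned here is publicly queryable, which is one way to see for yourself that Google aggregates real Chrome users' experiences. A small sketch of pulling a URL's field data from the Chrome UX Report API (the API key is a placeholder, and the example URL is invented):

```ts
// Sketch: querying the public Chrome UX Report (CrUX) API for the field
// data Google aggregates from real Chrome users. Replace API_KEY with a
// real key; the example URL is a placeholder.
async function getCruxRecord(url: string): Promise<unknown> {
  const endpoint =
    'https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=API_KEY';
  const response = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ url, formFactor: 'PHONE' }),
  });
  return response.json(); // metric distributions (LCP, INP, CLS) from field data
}

getCruxRecord('https://example.com/').then(console.log);
```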
00:19:34 But if you look at the documentation about Core Web Vitals on web.dev, you can see them saying almost outright that if they don't have enough data, if the response in Core Web Vitals is "waiting for input," it's because the metric called First Input Delay, or FID, requires user interaction to be measured. To that extent, when they have no local FID data, they wait until someone has interacted with the page before they can measure it. Later there's a newer Core Web Vital called INP (Interaction to Next Paint), and in their documentation they answer the question, "What if no INP value is reported?" The answer is that maybe the user never clicked, tapped, or pressed a key on the keyboard; maybe the page loaded but
00:20:07 they interacted using gestures or something like that; or maybe it was accessed by a bot or a headless browser that wasn't clicking around. So again, they're essentially admitting it: Core Web Vitals, and especially FID and INP, measure not just page load but the interactions of real users behaving in Chrome on the page, and they report that back to you.
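You can watch exactly these interaction-dependent measurements happen with Google's open-source web-vitals library; as the documentation quoted above says, INP only reports after a real user interaction. A minimal sketch (the reporting endpoint is invented):

```ts
// Sketch using Google's open-source `web-vitals` library. Note that onINP
// only reports after a real user interaction, matching the documentation
// quoted above. The /analytics endpoint is invented for illustration.
import { onCLS, onINP, onLCP, type Metric } from 'web-vitals';

function sendToAnalytics(metric: Metric): void {
  navigator.sendBeacon('/analytics', JSON.stringify({
    name: metric.name,
    value: metric.value,
  }));
}

onLCP(sendToAnalytics); // loading: Largest Contentful Paint
onCLS(sendToAnalytics); // visual stability: Cumulative Layout Shift
onINP(sendToAnalytics); // responsiveness: fires only after user interaction
```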
00:20:42 I think it's important to understand that Google is probably extrapolating even more information out of these clicks, and probably prioritizing crawling by engagement data. If you have a link on a page and it's never getting any clicks, Google would think, "Why would I even crawl it? Users don't seem to care about it; they don't want that information or need that link; we probably don't need to crawl it." This is a fundamental shift in how Google understands the web: they're allowing user clicks and engagement data to reprioritize what and when they crawl. So wait, maybe there's more. What if things go deeper and we need a little more tinfoil?
00:21:20 We know that Google is actively changing how they capture and cache data, and they're usually doing it under the guise of speeding up browsing, in this case mobile browsing. This is something Google started, you can see the date here, in February of 2019: something called the back/forward cache, or bfcache. This article talks about how it eats more RAM. For years we've talked about why Chrome is slow, or heavy, or why it needs so much
00:22:00 processing power: probably because it's grabbing more data and doing more processing than we realize. I also think Google may be trying to encourage us to speed up our websites, because the faster we make our websites, the more local processing can be done without a user really noticing the slowdown.
00:22:30 So they did this thing called bfcache, and the way it's supposed to work is that when you click around on a website and then hit the back button, Google says, "We shouldn't have to reload and reprocess that entire page; we should have a full snapshot of it." That's what they were doing for years. But recently Google changed how they do bfcaching. There used to be a way to prevent it, the no-store HTTP header (Cache-Control: no-store), and if you had that header set, Google couldn't use the bfcache. That was frustrating for Google, so recently, you can see this is December 2023, Google just started ignoring that header. Even if you said, "Google, don't store this, don't bfcache this," Google is
00:23:05 essentially saying, "Eh, we're going to do it anyway." And again, they say they're doing this to boost performance, but the point of bfcaching is to get a fully executed snapshot of the page.
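Both halves of this bfcache behavior are observable from ordinary web code: a page can detect that it was restored from the back/forward cache, and a server can send the Cache-Control: no-store header that historically blocked it. A sketch (Express is assumed for the server half):

```ts
// Client side: detecting a back/forward-cache restore. event.persisted is
// true when the page came back as a fully executed snapshot rather than a
// fresh load.
window.addEventListener('pageshow', (event: PageTransitionEvent) => {
  if (event.persisted) {
    console.log('Restored from bfcache: no reload, no script re-execution');
  }
});

// Server side (assumed Express): the header that historically opted a page
// out of bfcache; the December 2023 change described above is that Chrome
// began bfcaching some pages despite it.
// res.set('Cache-Control', 'no-store');
```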
00:23:41 Now, what else could Google be doing with that? Well, we know from Google's published research papers that they could be doing things like this: getting full annotations that break the page down and describe it, finding questions and answers on a page, getting a summary of the page, breaking it all into grids. It needs the fully rendered version of the page to do that, to do the summarization, to do all the cool new things its AI is trying to do. It needs that full snapshot. So they're doing all these things and saying users benefit because it speeds things up, but there's more going on than what they're telling us. And this may have caused some fallout. What we saw earlier this year was that, all of a sudden, Google was found to have been indexing links to private WhatsApp groups, things
00:24:18 behind logins that they shouldn't have been able to reach. How did Google get behind the firewall when they didn't have a login or a password? Probably users were viewing it, and it got bfcached and processed. Because it was behind a firewall, the webmaster didn't think they needed a robots.txt instruction, and so it got crawled and indexed. Same thing with these Google Docs. This is something Lily Ray found, where a kid's paper on John F. Kennedy and the death
00:24:53 penalty ranked because it was in Google Docs and had been rendered by a Chrome browser; somehow Google got it, grabbed it, and indexed it so it could rank. Same thing with these private Google Groups that all got crawled and indexed accidentally. These were private groups that should never have been crawled and indexed, but they were shown in Chrome, and so they accidentally got into the index. Maybe this is no harm, no foul, but realize that Google is seeing a lot of things behind firewalls that we think are private.
00:25:24 And in fact, we know that Google has a huge stake in Reddit. Google is processing not only our clicks and our behavior, but an understanding of who we are as a cohort, who we are demographically, who we are in the larger world; and in Chrome they're able to capture it and preprocess it, all using our computers, even when we're incognito. And then we get marketing from Reddit that says it's okay to overshare: keep your overshares on Reddit, tell us all your secrets, we'll keep your name private for you; you
00:26:00 can do it under a username that's not publicly associated with you. The problem is that it's still associated with you and your behavior in Chrome, and Google can tie it all back together. In that light, this seems especially nefarious. And it doesn't stop there. Luca Casonato came out on Twitter and said that Google Chrome gives all *.google.com sites full access to system and tab CPU usage, GPU usage, and memory usage. It also gives access to detailed processor information and provides a logging backchannel. This API is not
00:26:41 exposed to other sites, only to google.com sites. So this is another leak that the SEO community didn't hear much about. This is what it looks like in the Chromium code; you can see the code right there, grabbing all the tabs and all the local processing usage. It's described as something to help manage processing power for Google Meet: anywhere Google Meet might be used, Google wants to manage how much processing power is available, presumably so it can manage video quality on the fly.
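For context on what that class of data looks like: ordinary Chrome extensions can read it only after declaring the system.cpu permission in their manifest. A sketch of that normal, opt-in route; the contrast being that the reported backchannel gave *.google.com pages comparable access with no visible extension at all:

```ts
// Sketch: the normal, opt-in route to this data, a Chrome extension that
// declares the "system.cpu" permission in its manifest.json. The reported
// issue above is that google.com pages got comparable access via a hidden
// built-in extension, with no opt-in and nothing in the extensions menu.
chrome.system.cpu.getInfo((info: chrome.system.cpu.CpuInfo) => {
  console.log(`Processors: ${info.numOfProcessors}, model: ${info.modelName}`);
  for (const proc of info.processors) {
    const { user, kernel, total } = proc.usage;
    console.log(`Busy: ${(((user + kernel) / total) * 100).toFixed(1)}%`);
  }
});
```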
00:27:17 Now, that's a reasonable thing for Google to want to do. The problem is that we've signed off on them using that data, all the data they capture from us, in any way they want. Just because they have a good excuse to collect it doesn't mean they stop doing things with it once they're done; this is just the opening excuse. So I reached out to Dan Petrovic again, super smart, and he even showed me the folders on the computer where a lot of the processing
00:27:53 is done. You can see this on my local machine, in the Google Chrome user data, in the optimization guide model store. The folders in there are just numbered, but he helped me decode what's going on in each one: folder 2 is language detection, folder 9 is web permissions, 13 is autocomplete, 20 is web permissions prediction modeling; there's phishing, autocomplete scoring, history clusters, visual search; one unknown folder might be about conversions, and then text safety, but this is just a
00:28:28 guess, and there are some folders we haven't figured out yet. So this could go deeper: user processing modeling, journey modeling, language processing, language understanding, topic modeling. All of this could be happening in local folders that are just numbered on our computers, because they don't want to tell us exactly what they're doing. So as SEOs, what we have to question is the loss that many, many sites are seeing in organic clicks. I believe it's not an accident.
00:29:06 We know, based on Google's own documentation and the permission documents we have to agree to, that Google is using this data across all of its services. It reminds us, in changes to the Ts&Cs, that when you link different accounts, the data is shared across YouTube, ad services, Google Maps, Search, Chrome, Google Play, Google Shopping. All of these combine data to help "personalize content and ads," "develop and improve our services" (which is quite broad), "measure and improve the delivery of ads," and
00:29:42 "perform other purposes described in Google's privacy policy." They updated these Ts&Cs right after laws in the EU changed to require consent for data sharing. The other thing you need to know is that laws in the EU prevent browsers from being gatekeepers of specific kinds of data; they have to make it public. But the idea we saw with that secret Google Meet plugin was that the data was only going to google.com domains. And what you need to know about that leak, let me go back to it for a second, is that
00:30:24 it was a plugin that came with all versions of Chrome, including browsers built on Chromium like Brave and Edge, but it was never shown in the extensions menu. It was a secret extension, hidden, that you couldn't opt out of. Brave has since updated and made it something you can opt out of, but it's still in there by default, capturing all that data. So there's a lot of data being passed across all these services, including CPU and GPU usage, and everything you do across all of these services.
00:31:02 And for a long time there were these theories and scares in the SEO community, where SEOs would say, "I don't want to set up Google Analytics, because Google could use Google Analytics to change my rankings," and Google would flatly deny it. But the thing is, they don't have to use Google Analytics: they have the origin of all the data and can process it themselves. They can go directly to the source. And here is another notification,
00:31:38 about enhanced ad privacy in Chrome, where they specify that to measure the performance of an ad, "limited types of data are shared between sites, such as whether you made a purchase after visiting a site." So they're saying, yes, we're using long-term attribution to feed our model: maybe you didn't purchase right after you clicked the ad, but you purchased later on, and we want to model that too. And so what we see here is that the utility of Chrome changed over time.
00:32:13 At first, Chrome was launched just to help support Google as a search engine. Then, during mobile-first indexing, Chrome changed to support the launch of mobile-first indexing and to mitigate the risk associated with the loss of third-party cookies, something Google had been planning for a long time; that's how they justified internally needing to capture all of this data. But now Chrome is just being used to support data collection for things like Core Web Vitals, cohort ad targeting, journey modeling, Discover targeting, and other things they
00:32:44 consider business-critical objectives. The problem is that we never agreed to this, and they're not being clear about it. It needs a lot more clarification, because I think this Chrome data is feeding the monopoly more than people realize. We're getting things like very targeted ads in Google Discover. This is Lily Ray; she gives these examples from Google Discover, where she purchased Lululemon Align leggings and now, all of a sudden, she's getting
00:33:19 deals for used Lululemon Align leggings in Google Discover. That's because they know who she is; she's signed in on all of her devices, so they can say, "Well, we know you bought these leggings; maybe you want to buy some more." All of this data is being captured and used to target ads to make Google more money. She's been lumped into a cohort that likes either Lululemon in general or this pair of leggings in particular. They're modeling her journey: she just bought some; maybe she
00:33:53 doesn't want to pay full price; maybe she wants another deal. And they're feeding all of this into their PMax AI, which in the ad world is now really hard for digital advertisers to opt out of: Google's AI modeling and automation of ad serving. Google needs to train that AI on which ads will work best, and remember, this is how Google makes money. Google needs the ads to work so they can feed money to their other projects, including search, but more importantly for them, probably AI, because AI is expensive.
00:34:26 So this is a guy named Ed Zitron, who published an article called "The Subprime AI Crisis," in which he says that OpenAI needs to raise at least $3 billion, but more like $10 billion, to survive, as it's on course to lose $5 billion in 2024, a number likely to increase as more complex models demand more compute and more training data, with Anthropic CEO Dario Amodei predicting that future models may cost as much as $1 billion to train. Now,
00:35:01 what we realize here is that those AI platforms don't have a distributed processing option. They just don't. But Google does. So Google is likely looking at Chrome as a way not just to make more money on ads, but potentially to distribute the processing needs of future AI systems. If Google can do this going forward, they will have the edge they need in AI, even if they don't have it right now.
00:35:44 Now let's think again about the loss of cookies. This was a big threat to Google's model, because cookies help ads work better, but Google is also a heavy user of cookies themselves. You can see I've sorted my own browsing data by who's left the most cookies, and it's all Google properties leaving the most cookies on my computer. So they need cookies not only for their ad models to work, but also for their own tracking to work as it has before. And they were hedging against
00:36:14 that. The DOJ alleges that Open Bidding actually gave Google more insight into auctions; it helped them extract more fees and disintermediate rival ad exchanges from their own publishing customers. The Verge's reading is that Google was using Open Bidding, and their understanding of users based on their own cookies, to plan for a no-cookie future: basically to take rival companies out at the knees, disintermediate rival ad exchanges
00:36:49 from publishing customers, make those rivals' ad models stop working, and push those websites away from the ads that weren't working and toward ads on Google that were more likely to work. So as Google kills ad-supported businesses with things like the helpful content update, ad revenue shifts from low-revenue ad models, whether with Google or with their advertising rivals, to higher-margin ads that Google makes more money on.
00:37:26 I also think it's important to remember that Google forced everyone to switch from Universal Analytics to Google Analytics 4. This was a bit like the forced switch to the new Google Search Console, but here you had a year to transition your data, and if the data wasn't actively transitioned, it was deleted forever. Think about how much data that was: a lot of data that might have proven future cases against Google, about the loss of clicks and the increased reliance on ads. But you can also see,
00:38:02 just by looking at your own Google Analytics 4 data, that the data doesn't marry up: the new model in Google Analytics is entirely different, and sometimes less useful than the previous Google Analytics. I think Google saw that Google Analytics was a way they would basically be telling on themselves. So they said, "Let's delete as much data as possible without raising concern, and let's make the new data models not match up very well, so that things are harder to prove."
00:38:36 Basically: kill as much data as you can, insert fear, uncertainty, and doubt into the rest, and make it an inferior product, so it's harder to understand what's going on. And then, of course, in the end, Google did all of this data collection under the guise of not knowing how they would monetize without cookies. But after the DOJ findings, Google knew they were going to be under increased scrutiny for all of their privacy-violating behavior.
00:39:08 And so Google broke their promise and said, "You know what? We've talked for years about getting rid of third-party cookies. We're just not going to do it." I think they decided not to because people would be more okay with cookies than they would be if they found out about the heavy, invasive tracking already happening in Chrome right now. So let's wrap it up. What I want you to remember is that I have a good history of predicting what's happening with Google and what's important. I was the first to start
00:39:39 talking about mobile SEO. I was the first to start talking about Progressive Web Apps, and the first to talk about Entity-First Indexing when mobile-first indexing launched; that's the topic understanding. I was the first to talk about Fraggles, something Google now calls Passages, where Google breaks pages up into smaller chunks. I was the first to talk about Google's MUM journeys, their understanding of cross-utility, cross-modal behavior in a purchase
00:40:12 process. And last year I was the first to warn against risky behavior related to the potential penalties that came out right at that same time in the helpful content update. I was saying, "Be careful, there is going to be fallout," and there in fact was. So here's what's going on: we need to stop believing everything Google says, and stop taking their excuses when they explain some kind of data collection, assuming that's all they're doing with the data.
00:40:44 We need to stop carrying Google's water. We need to stop being Google apologists. We need to look at the evidence that's right in front of our faces, because things aren't as they seem. This is a monumental shift in the understanding of how Google works, what they're doing, and why they've had so much success. All the data Google is getting from processing our information on our personal computers is making it easier for them to model the demographic cohorts they use for ad targeting, modeling, and AI
00:41:14 training, not just against us, but against all of the people like us in the world: everyone who fits our cohort, fits our shopping behaviors, or is interested in the same things we are. Google is gatekeeping the data they're collecting via Chrome, which is illegal in the EU, and which has allowed them to build their business in uncompetitive ways. In the US it has created the monopoly, and there's potential for abuse in the future if Google starts to use Chrome for its own AI
00:41:48 processing needs as a way of beating its top competitors, because they're the only company that already has this kind of model in place for handling and mitigating the costs that are otherwise going to put most AI companies out of business, or at least force them to reckon with the incredibly high costs that AI processing and generative AI systems carry. I know this all seems incredible, but it's true. It might not be aliens, but it's probably aliens. Thanks, and if you have any questions, feel free
00:42:27 to contact me: I'm @Suzzicks on Twitter, or Cindy at MobileMoxie. If you like this video, hit the thumbs up; to be notified about future videos, click the logo; and to try the MobileMoxie mobile SEO toolset, use the promo code YOUTUBE to get 30 days free. Mobile-first SEO tools for a mobile-first world.
As the CEO and founder of Pubcon Inc., Brett Tabke has been instrumental in shaping the landscape of online marketing and search engine optimization. His journey in the computer industry spans more than three decades and has made him a pioneering force in the digital evolution. Full Bio
Visit Pubcon.com