Since March of 2019, Google has been rolling out a lot of updates. After the June 2019 update, which we’ve talked about here extensively, they seem to be hitting every few weeks. The latest one (nicknamed Maverick by Webmaster World’s Brett Tabke) has people perplexed as to just what is going on – or even whether anything is going on at all. Search Engine Roundtable knows that something is happening… but not what.

Danny Sullivan at Google said that they’d probably be announcing future core updates ahead of time, but we’ve seen no such announcements.

I think I know what’s happening here – and I’ve been talking about it from day one of this blog’s creation six weeks ago. Since I know a lot of my readers are not SEO people, before we can explore the answers, we need to look at the questions…

What is a Google Core Algorithm Update?

Google’s algorithm is, in general terms, the mathematical formula used to rank pages in its search engine. In order to provide the most relevant results, Google needs to adjust that formula to give more or less importance to certain things, sometimes adding entirely new factors to make it more accurate. When a core update is applied, it affects the data for virtually every single page on the web, assigning the values needed for the new algo to work. There are a lot of bits and bytes moving around while Google’s index of pages is updated to handle the new math (or rather, be handled by the new math).
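To make that a little more concrete, here’s a purely illustrative sketch in Python of what “changing the math” means in principle. The factor names and weights are invented for the example and are not Google’s actual signals; the real formula is vastly more complex and not public.

```python
# Purely illustrative: a ranking score as a weighted sum of signals.
# A "core update" is, loosely, a change to the weights and/or the addition
# of new factors, which forces every page's score to be recalculated.
CORE_WEIGHTS_V1 = {"relevance": 0.5, "links": 0.3, "freshness": 0.2}
CORE_WEIGHTS_V2 = {"relevance": 0.45, "links": 0.25, "freshness": 0.15,
                   "entity_match": 0.15}  # hypothetical new factor

def score(page_signals: dict, weights: dict) -> float:
    """Combine whatever signals a page has under the current weights."""
    return sum(weights[factor] * page_signals.get(factor, 0.0) for factor in weights)

page = {"relevance": 0.8, "links": 0.6, "freshness": 0.4, "entity_match": 0.9}
print(score(page, CORE_WEIGHTS_V1))  # how the page ranked under the old math
print(score(page, CORE_WEIGHTS_V2))  # how it ranks after the core update
```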

What Is Everflux?

Since I’m going to be calling this Everflux 2.0, we should look at what that is.

Everflux is a term I came up with at Webmaster World (in this brief post) when Google first introduced its “fresh index.” Before that, the results would stay pretty much the same for a whole month or so, then Google would “dance” while the index was updated. When the Fresh Results started showing up, Google started to regularly inject new content like news articles and new pages it discovered on the web every day. With new pages being introduced all the time, the results would always be fluctuating as things moved in and out and up and down in the rankings. For a while, the term Everflux was in common use around WMW and around the web.

In more recent years, Google has been crawling all pages and adding the new or updated versions of everything very quickly after they are discovered. Everflux then described how “all” of Google worked rather than just the minty fresh results, so it became a fairly useless term.

Entities and Framed Knowledge are Modular

If you’ve been playing along at home and following this blog like you should, none of these things should be new to you. Entities are things with known attributes that Google understands and can frame around a specific set of data to create a knowledge graph for that subject. Google’s RankBrain assists on the front end of search (understanding the words you type within their own frames of knowledge, and helping to create frames which don’t exist yet), but that’s impractical to do on the fly when it comes to the results side of things. There’s one search term in play, but potentially millions of pages in play for any given term.

While some of this is speculation, I’m usually pretty good at this sort of thing. My educated guess here is that, much like Google did when it introduced “fresh results” into the SERPs over a decade ago, it’s now introducing fresh (and expanded or influenced) entities into the mix at certain intervals.

Each page on the web likely has a set of identified entities on it. Those entities (and the knowledge graph created by them) stand independent of the pages themselves but are used to help rank the pages when one of the entities is called for by a search term. Entities may be connected to one another, but they can each stand independently as well. This makes them great opportunities to introduce new information at a different level than what we normally think of when we think of how Google works.
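As a rough mental model (and nothing more), you can picture an entity as a small, self-contained record: its own attributes, its relationships to other entities, and pointers to the pages where it was identified. The sketch below is my own hypothetical illustration; the field names are mine, not Google’s.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of an entity as a modular record. It carries its
# own attributes and can be updated or expanded without touching the pages
# that reference it.
@dataclass
class Entity:
    name: str
    attributes: dict = field(default_factory=dict)        # e.g. {"type": "medication"}
    related_entities: list = field(default_factory=list)  # connections to other entities
    pages: set = field(default_factory=set)                # URLs where this entity appears

def pages_for_entity(entities_by_name: dict, query_entity: str) -> set:
    """When a search term calls for an entity, the pages tied to it come into play."""
    entity = entities_by_name.get(query_entity)
    return entity.pages if entity else set()
```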

What is Everflux 2.0?

I am almost certain that these fluctuations that keep getting reported as new algorithm updates are not updates to the core algorithm at all. What we’re seeing now is more like the things Matt Cutts would have called a “Data Refresh” or “Index Update” in the olden days. (Or more accurately, I think, a combination of the two.) The most recent batch of actual core updates (starting with Medic last fall and continuing through to the June 2019 update last month) was all about setting up that third element – the one that ties the front end search interpreter and the indexed results together through the framed knowledge graphs created by Google using entities rather than keywords. The “math” isn’t changing. The content of the page isn’t changing.

What is changing is how Google understands these things. It’s the entities, attributes, and framed knowledge around those entities that are in play. I know of several sites affected by the June (and earlier) updates that have taken my advice to heart and built better citations from reputable sources. When these citations come in (especially if several other sites are saying the same thing), they can influence the attributes of an entity. (I talked about this a bit more in the entities post linked to above.) This new understanding of an entity needs to be tested and validated by other sources, so it can’t be introduced immediately upon discovery.
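If that’s roughly how it works, the delay makes sense: a newly claimed attribute would only be folded into the live entity once enough independent sources agree on it. The sketch below is speculation on my part, not a description of Google’s actual pipeline, and the threshold is invented.

```python
# Speculative sketch: a claimed attribute value is only promoted into the
# live entity once some minimum number of independent sites assert it.
MIN_INDEPENDENT_SOURCES = 3  # hypothetical threshold

def claim_is_validated(claims, attribute, value):
    """Accept a new attribute value only if enough distinct domains say the same thing."""
    sources = {claim["source_domain"] for claim in claims
               if claim["attribute"] == attribute and claim["value"] == value}
    return len(sources) >= MIN_INDEPENDENT_SOURCES
```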

It does, obviously, need to be introduced at some point, though. And these roughly bi-weekly shake-ups people keep reporting are likely either the testing of entity validation or the actual act of introducing these new or modified entities into the system to help rank things.

What Does This All Mean?

This all means that, if I’m correct in my assumptions here, the things we’ve been teaching here have been useful, and that I’m going to continue to try to break things down for everyone – especially those of you in the small to medium business sectors who don’t get things explained to you all too often.

It all makes sense, though. We know for certain that Google is using entities to describe things and build knowledge graphs. What we don’t know is exactly how they are being introduced to the index. Since they have to be verified as the machine learning applications build out the frames of knowledge about these things, they can’t just go in instantly like a new web page. And they can’t be a part of the “core” algorithm either. Why would they be?

The sites reporting the biggest shuffles are still the ones whose entities are most open to influence. Facts are facts, but much of science, health, politics, and news is a matter of opinion, and it’s harder to discern the prevailing winds and understand what is fact, what is opinion, and what is a load of bunk. As the entities and knowledge graphs are built out by our efforts and the efforts of others in the same industry, Google will get better at framing this knowledge and the results will start to become more stable.

If you want to keep up on all this stuff and the other great information I have for online business owners and web professionals alike, don’t forget to use the links in the sidebar (or below this on your phone) to follow me on Facebook, Twitter, and LinkedIn. Feel free to post your thoughts in the comments below and we’ll see you around the interwebs! Thanks for dropping in!
