Danny Sullivan at Google said that they’d probably be announcing future core updates ahead of time, but we’ve seen no such announcements.
I think I know what’s happening here – and I’ve been talking about it from day one of this blog’s creation six weeks ago. Since I know a lot of my readers are not SEO people, before we can explore the answers, we need to look at the questions…
What is a Google Core Algorithm Update?
Google’s algorithm is, in general terms, the mathematical formula used to rank pages in its search engine. To keep providing the most relevant results, Google periodically adjusts that formula, giving more or less importance to certain things and sometimes adding entirely new factors to make it more accurate. When a core update is applied, it has to touch the data for virtually every page on the web so each page gets the values the new formula needs. There are a lot of bits and bytes moving around while Google’s index of pages is updated to handle the new math (or rather, be handled by the new math).
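To make that idea concrete, here is a toy sketch in Python of what a core update means conceptually: the same pages get re-scored under a new set of weights, sometimes with a brand-new factor thrown in. The signal names and numbers here are entirely mine, not Google's; nobody outside Google knows the real formula.

```python
# Toy illustration only (not Google's actual formula): a "core update" modeled
# as a change to the weights a ranking function gives to hypothetical signals.

def rank_score(page_signals: dict, weights: dict) -> float:
    """Combine a page's signals into a single score using the current weights."""
    return sum(weights.get(name, 0.0) * value for name, value in page_signals.items())

# Hypothetical signals for one page.
page = {"relevance": 0.8, "link_authority": 0.6, "freshness": 0.3}

# Before the core update...
old_weights = {"relevance": 1.0, "link_authority": 0.9, "freshness": 0.2}
# ...and after, with importance shifted and an all-new factor added.
new_weights = {"relevance": 1.2, "link_authority": 0.6, "freshness": 0.2, "expertise": 0.5}

print(rank_score(page, old_weights))  # the page's score under the old math
print(rank_score(page, new_weights))  # the same page's score under the new math
```

When every page on the web has to be pushed through the "new math" like that, you can see why a core update takes time to roll out.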
What Is Everflux?
Since I’m going to be calling this Everflux 2.0, we should look at what that is.
Everflux is a term I came up with at Webmaster World (in this brief post) when Google first introduced its “fresh index.” Before that, the results would stay pretty much the same for a whole month or so, then Google would “dance” while the index was updated. When the Fresh Results started showing up, Google started to regularly inject new content like news articles and new pages it discovered on the web every day. With new pages being introduced all the time, the results would always be fluctuating as things moved in and out and up and down in the rankings. For a while, the term Everflux was in common use around WMW and around the web.
In more recent years, Google has been crawling pages and adding new or updated versions of everything very quickly after they are discovered. Everflux then described how “all” of Google worked, not just the minty fresh results, so it became a fairly useless term.
Entities and Framed Knowledge are Modular
While some of this is speculation, I’m usually pretty good at this sort of thing. My educated guess is that, much like when Google introduced “fresh results” into the SERPs over a decade ago, it’s now introducing fresh (and expanded and/or influenced) entities into the mix at certain intervals.
Each page on the web likely has a set of identified entities on it. Those entities (and the knowledge graph created by them) stand independent of the pages themselves but are used to help rank the pages when one of the entities is called for by a search term. Entities may be connected to one another, but they can each stand independently as well. This makes them great opportunities to introduce new information at a different level than what we normally think of when we think of how Google works.
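If you want a picture of how modular that is, here is a tiny Python sketch. Every name in it is hypothetical (this is not Google's internal structure); the point is simply that entities live in their own graph with attributes and relationships, while pages merely point at them, so either side can be updated without touching the other.

```python
# Minimal sketch of entities standing apart from pages. All names are
# hypothetical and for illustration only.

entities = {
    "everflux":     {"attributes": {"type": "SEO concept"}, "related": ["google_dance"]},
    "google_dance": {"attributes": {"type": "SEO concept"}, "related": ["everflux"]},
}

pages = {
    "https://example.com/seo-history": {"entities": ["everflux", "google_dance"]},
    "https://example.com/recipes":     {"entities": []},
}

def pages_for_entity(entity_id: str) -> list[str]:
    """Once a query resolves to an entity, the pages referencing it can be ranked."""
    return [url for url, page in pages.items() if entity_id in page["entities"]]

print(pages_for_entity("everflux"))  # pages that could rank for this entity
```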
What is Everflux 2.0?
What is changing is how Google understands these things. It’s the entities, attributes, and framed knowledge around these entities that are in play. I know of several sites affected by the June (and earlier) updates that have been taking my advice to build better citations from reputable sources to heart. When these citations come in (especially if several other sites are saying the same thing), they can influence the attributes of an entity. (I talked about this a bit more in the entities post linked to above.) This new understanding of an entity needs to be tested and validated by other sources, so it can’t be introduced immediately upon discovery.
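Here is a rough sketch of that corroboration idea, under my own assumption (and it is only an assumption) that a proposed attribute value only sticks once enough independent sources repeat it:

```python
# Hedged sketch of citation corroboration: an entity attribute is only accepted
# once enough independent sources make the same claim. The threshold and data
# shapes are my own invention, not anything Google has published.

from collections import Counter

CORROBORATION_THRESHOLD = 3  # hypothetical number of agreeing sources required

def validate_claims(claims: list[tuple[str, str]]) -> dict[str, str]:
    """claims is a list of (attribute, value) pairs, one per citing source.
    Return only the attribute values repeated by enough sources."""
    counts = Counter(claims)
    return {attr: value for (attr, value), n in counts.items()
            if n >= CORROBORATION_THRESHOLD}

# Three reputable sites describe the business the same way; one disagrees.
claims = [
    ("category", "family law firm"),
    ("category", "family law firm"),
    ("category", "family law firm"),
    ("category", "general practice"),
]

print(validate_claims(claims))  # {'category': 'family law firm'}
```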
It does, obviously, need to be introduced at some point, though. And these roughly bi-weekly update announcements are likely either testing of entity validation or the actual act of introducing these new or modified entities into the system to help rank things.
What Does This All Mean?
This all means that, if I’m correct in my assumptions here, the things we’ve been teaching here have been useful, and I’m going to continue to break things down for everyone – especially those of you in the small to medium business sectors who don’t get things explained to you all too often.
It all makes sense, though. We know for certain that Google is using entities to describe things and build knowledge graphs. What we don’t know is exactly how they are being introduced to the index. Since they have to be verified as the machine learning applications build out the frames of knowledge about these things, they can’t just go in instantly like a new web page. And they can’t be a part of the “core” algorithm either. Why would they be?
The sites reporting the biggest shuffles are still the ones whose entities are most open to influence. Facts are facts, but much of science, health, politics, and news is a matter of opinion, and it’s harder to discern the prevailing winds and understand what is fact, what is opinion, and what is a load of bunk. As the entities and knowledge graphs build themselves out through our efforts and the efforts of those in the same industry, Google will get better at framing this knowledge, and the results will start to become more consistent.