Another year, another MozCon. Full transparency: I'd never been to a MozCon before this year. I'd always wanted to go, but things just never lined up. This being my first time, I must admit I was overwhelmed. There were a ton of great speakers offering up new insights and reaffirming things I already knew, and it's always great to hear more experienced SEOs confirm what you thought you knew.
There were so many speakers with so much information packed into a three-day conference, it was hard to capture every detail. So, let’s talk about what I feel are the top technical takeaways from MozCon 2018:
- The tactical side of technical SEO is the easiest portion of SEO to let a developer own, so we should do that and focus more on strategy and being part of the open source community to help shape the future of search.
- “Entity-indexing” is topic indexing and is the near future of search. Focus on using structured data to help search engines understand the entity relationships in your content.
- Write content for natural language processors. Utilize available tools to increase the salience of entities in relation to your topic by using keyword indicators.
- SEO is much more than you think.
- Machine learning is where we need to focus as technical SEOs, and there are a lot of free tools to help set up and run machine learning algorithms.
Now that we have the gist of what topics were covered, I’d like to dive in a bit on each one of these in order.
This talk was from Jono Alderson at Yoast. His main argument was that there’s no reason we as technical SEOs can’t give away certain repetitive parts of our discipline. Things like addressing 404 errors, canonical tags, sitemaps, etc., can be handed over to a developer and included as part of their process flow for making changes to a website. Teaching them this can free up our time in the long run, so we can focus on other strategic thinking.
I agree with this wholeheartedly, though I still feel that we need to be the overseer while these changes are being made. We also still need to be the translator between a client and their dev team. Like Jono, I spend most of my time going back over and fixing the same issues repeatedly, and handing those off would allow me to provide better strategic plans. With Google's partnership with WordPress, getting involved in the open source community can only make us better at our jobs and allow us to help shape the search industry. We should be actively communicating in their forums and providing suggestions on how they can make WordPress more SEO-friendly out of the box. We should be the tech leaders in our field and be the liaison between Google and everyone else looking to grow their businesses online.
This topic was covered by Cindy Krum from MobileMoxie. It wasn’t a totally new topic to me (I’ve read posts and talked about entity-indexing with other technical SEOs), but it was great to hear a bit more about it. The idea is that SEOs should be thinking more about indexing for entities and building content that contains entities that are highly salient to the overall topic of the content. This can be hard to understand, but, in a nutshell, stop thinking about keywords and focus more on the entity those keywords describe or relate to and optimize your content around that topic or entity.
My thoughts align with this. Long past are the days when including a keyword a certain number of times in content made a difference, or when sprinkling in synonyms made sense. I would first define the intent of a page, then research the topics/entities that would make sense to include in that content and start building it out.
Google has been using entities since the introduction of the Knowledge Graph. With so many searches providing Google SERP features back to the user (think Knowledge Graph, Quick Answers, etc.), it makes sense to optimize your content for “entity indexation.” Start marking everything up with structured data. This helps create that entity relationship and makes it easier for search engines to understand.
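To make the structured-data advice concrete, here is a minimal sketch of schema.org Article markup built as a Python dict and emitted as JSON-LD. The page title, author, publisher, and `sameAs` URLs are all hypothetical placeholders, not from the talk; the point is that typed properties and `sameAs` links are what tie your content to known entities.

```python
import json

# Hypothetical page details -- swap in your own. The schema.org types
# (Article, Person, Organization, Thing) and the "sameAs" links are what
# help search engines connect your content to entities they already know.
article_markup = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "What Is Entity-Based SEO?",
    "author": {
        "@type": "Person",
        "name": "Jane Example",
        # sameAs ties the author to an existing entity profile
        "sameAs": ["https://twitter.com/janeexample"],
    },
    "publisher": {
        "@type": "Organization",
        "name": "Example Agency",
    },
    "about": {
        # The entity the content is about, linked to a canonical reference
        "@type": "Thing",
        "name": "Search engine optimization",
        "sameAs": ["https://en.wikipedia.org/wiki/Search_engine_optimization"],
    },
}

# This JSON-LD would go inside a <script type="application/ld+json"> tag
print(json.dumps(article_markup, indent=2))
```

The `about` property with a Wikipedia `sameAs` reference is the part that most directly expresses an entity relationship rather than a keyword.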
Write Content for Natural Language Processors and Improve Salience to Rank Better
This topic was covered by Stephanie Briggs from Briggsby, and her talk provided an in-depth look at how we can use tools to research content topics, then write and optimize content to improve a client's rankings.
Her talk focused on how the old ways of creating content and optimizing it for search don’t work anymore. You can no longer just add primary and supporting keywords to specific fields that search engines use to determine rank. It comes down to topic-based content creation and, within each topic, choosing the right words that have a high salience score in relation to the overall topic of the page. Think of salience as relevance. It’s a number that’s assigned to a word that measures how relevant the word is to the overall topic of the content it’s included in.
There are loads of tools out there, most of which are free, that can help you research keywords and topics. Google understands intent and you can see that with the Google Cloud Natural Language API. Typing in a sentence reveals what Google has identified as the entities and their salience scores. The example used in Stephanie’s presentation was “Queer Eye is the best show on television right now.” Google determined that “Queer Eye” was the primary entity and provided the Wikipedia page as a URL reference. It also determined that “show” was a reference to “Queer Eye.” Indicator words provide a higher salience score than synonyms or repeating the same main entity over and over again. It helps to identify those indicator words and include those when writing content.
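Google doesn't publish how it actually computes salience, but a toy sketch can make the concept concrete. The function below scores each entity by simple normalized word frequency; the entity names and the scoring method are illustrative assumptions only, far cruder than the real model, which also weighs grammar, position, and coreference.

```python
import re
from collections import Counter

def toy_salience(text, entities):
    """Crude illustration of salience: score each entity by how often
    its words appear in the text, normalized so the scores sum to 1.0.
    NOT Google's algorithm -- just a way to see 'relevance as a number'."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    raw = {e: sum(counts[w] for w in e.lower().split()) for e in entities}
    total = sum(raw.values()) or 1
    return {e: raw[e] / total for e in entities}

# The example sentence from Stephanie's presentation
sentence = "Queer Eye is the best show on television right now."
scores = toy_salience(sentence, ["Queer Eye", "show", "television"])
```

Even this crude counter ranks "Queer Eye" as the most salient entity in the sentence; the real Cloud Natural Language API demo does the same, and additionally resolves "show" as a reference back to "Queer Eye."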
Doing smarter research can lead to smarter content. Keyword and topic research should take time. It’ll lead to more robust content that users and search engines find more useful than that of your competitors who follow the same old rules.
SEO is Much More Than You Think
Google’s web-rendering service currently uses an outdated version of headless Chrome when it crawls the web, so features that are available in the most recent version of Chrome aren’t necessarily available to Googlebot. It’s imperative to adhere to progressive enhancement to make sure all users and search engines have a good experience with your content.
The talk then went deeper into the processes that happen on the backend when you search. More specifically, how information is retrieved, how it's scored and how all that changes based on the search query and other conditions. Even as a technical SEO, I picked up a lot of new information and tips that I wasn't privy to before this. One thing that I wasn't using before, but will be looking into now, is the 304 HTTP status code. This handy response tells a search engine it doesn't need to re-download a file because it hasn't changed since the crawler last fetched it. That frees up crawl budget, since most files won't have changed and don't need to be downloaded again, allowing Google to get deeper into your website. Overall, this was a great talk to attend and I walked away with more knowledge than I had when I went into it.
Machine Learning is Where We Need to Spend Our Time as Technical SEOs
This topic was covered by Britney Muller from Moz. The main point of her talk was that we, as technical SEOs, need to be paying more attention to machine learning. After all, Google is already using it in its algorithm.
She also talked a lot about different tools that you can use to run your own machine learning training and test sessions. TensorFlow, which Britney mentioned, is a completely free library for building and running ML models. There are other tools as well: Google's Colaboratory lets you run your machine learning models in a user-friendly front-end, so I recommend using that. If you're new to AI/ML, you can go through Google's machine learning crash course. I'm going through it now and find it extremely interesting.
In the second half of the talk, Britney described the different types of ML, including supervised vs. unsupervised, and then went on to talk about the three different types of models: clustering, regression and classification. She then went on to describe how artificial intelligence/machine learning is useful for SEO. Voice search is one area, but she also mentioned loads of other tasks that can potentially use ML to make our life easier, including creating meta descriptions, titles and other tasks that use a formulaic approach. ML is already in the wild and you can see that with Netflix recommendations, email inboxes and other various areas.
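To ground the supervised classification idea, here is a toy nearest-centroid classifier written from scratch so it runs without any ML library. The features (word count, outbound links) and the "article"/"category" labels are invented for illustration, not from Britney's talk; the point is the train-on-labeled-examples, predict-on-new-data loop that defines supervised learning.

```python
def centroid(points):
    """Average a list of equal-length feature tuples into one centroid."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(examples):
    """examples: list of (features, label) pairs.
    Returns a model mapping each label to the centroid of its examples."""
    by_label = {}
    for features, label in examples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], features))

# Hand-labeled training data: (word_count, outbound_links) -> page type
training = [
    ((2000, 5), "article"), ((1800, 8), "article"),
    ((150, 40), "category"), ((200, 55), "category"),
]
model = train(training)

# Predict the type of a new, unlabeled page
label = predict(model, (1700, 6))
```

Swap the distance-to-centroid step for a fitted line and you have regression; drop the labels and group points by proximity alone and you have clustering, which is why it belongs on the unsupervised side.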
Overall, this talk was excellent. Britney took a very intimidating topic and made it incredibly digestible for people who don’t know a lot about ML. It also got me more interested in learning about this topic on my own.
Being a first-time MozCon attendee, I was surprised and excited to see all “my people” in the same room. I was able to talk to them on my level and that made it easier to have a conversation. The speakers were all smart and brought a lot of information. It was a lot to take in over three days, but I walked away feeling more confident about my own knowledge, as well as having learned a lot of new things that have pushed me to dive deeper into new areas like ML. As a first-time attendee, I would encourage anyone new or seasoned who wants to learn more about SEO to attend this conference. It’ll be worth the investment.