In 2008 I launched my first Internet project: an online electronics store that needed promotion.
Initially, the promotion was handed over to the programmers who had built the site.
In five minutes they compiled a list of keys: mobile phones, camcorders, cameras, iPhones, Samsungs, that is, all the categories and products on the site.
These were generic names that bore no resemblance to a properly composed semantic core.
A long period passed without results.
The incomprehensible reports forced me to look for contractors who specialized in website promotion.
I found a local company and entrusted the project to them, but that too came to nothing.
Then came the understanding that promotion should be done by real professionals.
After reading a lot of reviews, I found one of the best-rated freelancers, who assured me of success.
Six months later, there were still no results.
It was this lack of organic results over two years that led me to SEO.
Subsequently, it became my main vocation.
Now I understand what went wrong in my initial promotion.
These mistakes are repeated by the bulk of SEO specialists, even experienced ones who have spent years promoting websites.
The failures all came down to working with keywords incorrectly.
In essence, there was no understanding of what we were promoting.
There was no time to collect a semantic core; the site just had to be filled with content as quickly as possible.
The semantic core is a list of keywords that describe the site's topic as fully as possible.
To assemble it, you will need to collect sets of keywords. At this stage, weigh your resources against the cost of promoting high- and mid-frequency queries: if you have the budget to chase maximum traffic, target high- and mid-frequency queries; if the budget is limited, target mid- and low-frequency ones.
Even with a large budget, it makes no sense to promote the site only on high-frequency queries. Such queries are often too general and carry an unspecific meaning, for example: "listen to music", "news", "sports".
When choosing search queries, analyze the indicators attached to each search phrase, above all its frequency and the level of competition.
In practice, the semantic core can be selected using the following methods:
Competitor websites can be a source of keywords for the semantic core. From them you can quickly pick up keywords and also determine the frequency of their "environment" through semantic analysis: make a semantic assessment of a page's text, and the most frequently mentioned words will make up its morphological core (see the sketch after this list);
We recommend building your own semantic core from the statistics of dedicated services. Use, for example, Yandex Wordstat, the statistics system of the Yandex search engine. There you can see the frequency of a search query and find out what users search for along with that keyword;
Search suggestions, the "hints" that appear when you start typing a phrase into the search box, are another source; these words and phrases can also enter the semantic core as related queries;
Closed databases of search queries, such as Pastukhov's databases, can also supply keywords for the core. These are special data arrays containing information about effective combinations of search queries;
Internal site statistics can likewise be a source of data about the search queries that interest users. They record where a reader came from, how many pages they viewed, and which browser they used.
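To illustrate the competitor-page method from the first item above, here is a minimal Python sketch that counts the most frequent words on a page, a rough "morphological core". The stop-word list and sample text are illustrative assumptions, not a production tokenizer:

```python
# Sketch: count the most frequently mentioned words on a page to get a rough
# "morphological core". Stop words and sample text are illustrative only.
import re
from collections import Counter

STOP_WORDS = {"and", "or", "the", "a", "in", "for", "with", "to", "of"}

def morphological_core(page_text, top_n=10):
    words = re.findall(r"[a-zа-яё-]+", page_text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS and len(w) > 2)
    return counts.most_common(top_n)

sample = "Manicure and pedicure in Moscow. Prices for manicure and gel manicure."
print(morphological_core(sample))
# [('manicure', 3), ('pedicure', 1), ('moscow', 1), ('prices', 1), ('gel', 1)]
```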
Yandex.Wordstat is a popular free tool for compiling a semantic core. With this service you can find out how many times visitors entered a specific query into the Yandex search engine, and analyze how demand for that query changes from month to month.
Google AdWords is one of the most widely used systems for building a site's semantic core. With Google Keyword Planner you can estimate and forecast future impressions for specific queries.
Yandex.Direct is what many developers use to select the most profitable keywords. If advertisements are later placed on the site, this approach will earn the resource's owner a good profit.
SlovoEB is the younger brother of Key Collector and is likewise used to compile a site's semantic core. It works from Yandex data. Its advantages include an intuitive interface and accessibility not only to professionals but also to beginners who are just getting into SEO analytics.
Pastukhov's databases, according to many experts, have no competitors. They display queries that neither Google nor Yandex shows. Max Pastukhov's databases have many other features as well, including a convenient software shell.
SpyWords is an interesting tool for analyzing competitors' keywords. With it you can compare the semantic cores of the resources you are interested in and get full data on competitors' PPC and SEO campaigns. The service is in Russian, so there will be no problem mastering its functionality.
Key Collector is a paid program created specifically for professionals. It helps compose the semantic core by identifying relevant queries, and it is used to estimate the cost of promoting a resource for the keywords of interest. Besides its high efficiency, the program stands out for its ease of use.
SEMrush identifies the most effective keywords based on data from competing resources. It can be used to select low-frequency queries that carry a high level of traffic. As practice shows, it is very easy to promote a resource to the first positions of the search results with such queries.
SeoLib is a service that has earned the trust of optimizers. It offers extensive functionality, lets you competently compose a semantic core, and performs the necessary analytics. In free mode you can analyze 25 queries per day.
Prodvigator lets you assemble a primary semantic core in just a few minutes. It is a service used mainly for analyzing competing sites and for selecting the most effective key queries. Keyword analysis can be run against Google in Russia or against Yandex in the Moscow region.
The semantic core comes together fairly quickly if you use these sources and databases as a guide.
- Based on the site's content and related topics, select the key queries that most accurately reflect the meaning of your web portal.
- From the selected set, eliminate the superfluous queries, especially those that could worsen the indexing of the resource. Filter the keywords based on the results of the analysis described above.
- Distribute the resulting semantic core evenly between the pages of the site; where necessary, commission texts with the required theme and volume of keywords.
For example, you are promoting a nail salon in Moscow.
Think up and select all kinds of words that fit the site's theme:
- pedicure;
- manicure;
- nail extension.
Now go to the Yandex Wordstat service and enter each query, having first selected the region in which you plan to promote.
Copy all the words from the left column into Excel, plus the supporting phrases from the right column.
Remove the unnecessary words that do not fit the topic; the words that do fit are highlighted in red below.
The figure of 2,320 shows how many times people typed this query, not only in its pure form but also as part of other phrases, for example: "manicure price in Moscow", "price of manicure and pedicure in Moscow", etc.
If you enter the query in quotation marks, you get a different figure, one that counts only the phrase itself in all its word forms, for example: "manicure price", "manicure prices", etc.
If you enter the same query in quotation marks with exclamation marks ("!manicure !price"), you will see how many times users typed exactly the query "manicure price", with no other word forms.
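To summarize the three operator levels just described, here is how the same query looks at each level (the broad-match count is the one from the example above; the operator semantics follow Wordstat's documented behavior):

```
manicure price          broad match: all queries containing these words (2,320/month)
"manicure price"        phrase match: only these two words, in any word form
"!manicure !price"      exact match: exactly these word forms
```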
Next, break the resulting list of words down across the pages of the site. For example, leave the high-frequency queries on the main page and the site's main sections, such as: manicure, nail service studio, nail extension. Distribute the mid- and low-frequency queries across the remaining pages, for example: manicure and pedicure prices, gel nail extension design. Words should also be divided into groups by meaning.
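As a minimal sketch of this distribution step, here is how frequency tiers could drive the page assignment in Python; the thresholds (300 and 50) and the sample frequencies are arbitrary assumptions for illustration:

```python
# Sketch: assign each query to a page tier by its exact monthly frequency.
# The thresholds (300 and 50) and frequencies are illustrative assumptions.
def page_tier(freq):
    if freq >= 300:
        return "home page / main sections (HF)"
    if freq >= 50:
        return "section pages (MF)"
    return "individual articles (LF)"

queries = {
    "manicure": 2320,
    "manicure and pedicure prices": 140,
    "gel nail extension design": 18,
}
for phrase, freq in queries.items():
    print(f"{phrase!r}: {page_tier(freq)}")
```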
When compiling a semantic core, no one is immune from mistakes.
We wish you good luck in promoting your site!
The semantic core of the site is a complete set of keywords corresponding to the subject of the web resource, by which users can find it in the search engine.
For example, the fairy-tale character Baba Yaga would have roughly this semantic core: Baba Yaga, Baba Yaga fairy tales, Baba Yaga Russian fairy tales, old woman with a mortar, old woman with a mortar and broom, evil witch, hut on chicken legs, etc.
Before starting promotion work, you need to find all the keys by which targeted visitors might search for the site. Based on the semantics, you draw up the site structure, distribute the keys, write the meta tags, document titles and image descriptions, and develop an anchor list for link building.
When compiling the semantics, you need to solve the main problem: determining what information to publish in order to attract a potential client.
Compiling the list of keys also solves another important task: for each search phrase, you define a relevant page that can fully answer the user's question.
There are two ways to solve this problem.
According to statistics, 60-80% of all phrases and words are low-frequency. Working with them is cheaper and easier, so you should build the most voluminous core you can and keep adding new low-frequency phrases to it. High- and mid-frequency queries should not be ignored either, but the main focus belongs on expanding the list of LF queries.
Highlight the main terms of your business and the needs of your users. For example, a laundry's customers are interested in washing and cleaning.
Then define the tails and specifics (anything beyond two words in a query) that users add to the main terms. This widens your reach of the target audience and lowers the frequency of the terms (washing blankets, washing jackets, etc.).
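Here is a minimal Python sketch of pulling such tails out of a raw query list; the head term and the query list are invented for illustration:

```python
# Sketch: extract the "tails" users append to a head term (Python 3.9+).
head = "washing"
queries = ["washing", "washing blankets", "washing jackets price", "dry cleaning"]
tails = [q.removeprefix(head).strip()
         for q in queries
         if q.startswith(head) and q != head]
print(tails)  # ['blankets', 'jackets price']
```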
Use this method as a supplementary way to verify that particular keywords are chosen correctly. The BuzzSumo, Searchmetrics, SEMrush, and Advse tools will help you here.
Consider some of the most popular services.
First of all, let's define the terms: "keywords", "keys", "key queries" and "search queries" all mean the words or phrases potential customers use to look for the information they need on your site.
Make the following lists: categories of goods or services, their names, brands, commercial tails ("buy", "order", etc.), synonyms, transliterations into Latin (or into Russian, respectively), professional jargon ("klava" for "keyboard", etc.), technical characteristics, words with likely typos and misspellings (for example, common misspellings of "Orenburg"), and references to the locality (city, streets, etc.).
When working with the lists, be guided by the brief from the promotion agreement, the structure of the web resource, its information and price lists, competitor sites, and your previous SEO experience.
Proceed to selecting the semantics by mixing the phrases from the previous step, either manually or with the help of services.
Generate a list of stop words and remove the unsuitable key queries.
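A minimal Python sketch of these last two steps, mixing the term lists into candidate queries and then dropping the ones that hit a stop word; all three word lists are illustrative assumptions:

```python
# Sketch: mix term lists into candidate queries, then drop any that contain
# a stop word. All three word lists are illustrative assumptions.
from itertools import product

services = ["manicure", "pedicure", "nail extension"]
tails = ["", "price", "buy", "photo"]
stop_words = {"photo", "free"}  # e.g. non-commercial intent for this project

candidates = {" ".join(filter(None, combo)) for combo in product(services, tails)}
candidates = {c for c in candidates if not set(c.split()) & stop_words}
print(sorted(candidates))
```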
Group the key queries by relevant pages. For each key, select the most relevant page or create a new document. It is advisable to do this work manually; for large projects there are paid services such as Rush Analytics.
Go from largest to smallest. First distribute the high-frequency queries across the pages, then do the same with the mid-frequency ones. Low-frequency queries can be added to pages that already carry HF and MF queries, or be given separate pages of their own.
Analyze the first results of the work and adjust the core as needed.
When grouping key queries, work through all the possible sections of the web resource, fill each page with useful information, and do not create duplicate text.
If pages are stuffed with keywords, ranking worsens and the site can be penalized for spam; and if the web resource has the wrong structure, promoting it will be very difficult.
It does not matter exactly how you select the semantics. With the right approach, you will get the semantic core you need for successful site promotion.
Like almost all other webmasters, I create my semantic core with the Key Collector program; it is by far the best program for compiling a semantic core. How to use it is a topic for a separate article, although the Internet is full of information on the subject; I recommend, for example, the manual by Dmitry Sidash (sidash.ru).
Since the question was raised about an example of compiling the core, here is one.
Suppose we have a site about British cats. I type the phrase "British cat" into the "List of phrases" field and click the "Parse" button.
I get a long list of phrases that starts as follows (each phrase is shown with its frequency):
british cats 75553
british cats photo 12421
british fold cat 7273
british fold cattery 5545
cats of british breed 4763
british shorthair cat 3571
colors of british cats 3474
british cats price 2461
blue british cat 2302
british fold cat photo 2224
mating of british cats 18
buy a british cat 1179
longhair british cat 1083
pregnancy of british cat 974
british chinchilla cat 969
cats of british breed photo 953
cattery of british cats moscow 886
color of british cats photo 882
british cats care 855
british shorthair cat photo 840
scottish and british cats 763
names of british cats 762
british blue cat photo 723
photo of british blue cat 723
british black cat 699
what to feed british cats 678
The list itself is much longer; I have only given its beginning.
Based on this list, the site will have articles about the varieties of these cats (fold, blue, shorthair, longhair), an article about their pregnancy, about what to feed them, about names, and so on down the list.
For each article, one main query is taken (it becomes the article's topic). The article is not limited to that one query, however: related queries are added to it, along with the variations and word forms of the main query that Key Collector shows further down the list.
For example, the word "fold" yields the following keys:
british fold cat 7273
british fold cat photo 2224
british fold cat price 513
fold cat photo 129
british fold cats character 112
british fold cat grooming 112
mating of british fold cats 98
british shorthair fold cat 83
color of british fold cats 79
To avoid overspam (which can also result from the cumulative use of too many keys in the text, in the title, in the headings, and so on), I would not take all of them with the main query included, but individual words from them (photo, buy, character, care, etc.) do make sense to use in the article, so that it ranks better for a large number of low-frequency queries.
Thus, for the article about fold cats we form a group of keywords to be used in the article. Keyword groups for other articles are formed the same way; and that is the answer to the question of how to create a site's semantic core.
There is one more important point concerning exact frequency and competition: these must also be collected in Key Collector. To do so, select all the queries with checkboxes and, on the "Yandex.Wordstat frequencies" tab, click the Collect "!" frequencies button. The program will show the exact frequency of each phrase (that is, with that word order and in that case), which is a much more accurate indicator than the overall frequency.
To check competition in the same Key Collector, click "Get data for Yandex PS" (or for Google), then click "Calculate KEI from the available data". The program will collect how many main pages rank in the TOP-10 for each query (the more there are, the harder it is to get in) and how many pages in the TOP-10 contain a matching title (likewise, the more there are, the harder it is to break into the top).
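Key Collector's KEI is configurable, so purely as an illustration, here is one simple way to fold these two signals and the exact frequency into a single score; the formula is an assumption of mine, not the program's built-in one:

```python
# Sketch: a KEI-style attractiveness score built from exact frequency and the
# two competition signals above. Illustrative formula, not Key Collector's own.
def kei(exact_freq, main_pages_top10, titles_top10):
    competition = 1 + main_pages_top10 + titles_top10  # ranges 1..21
    return exact_freq / competition  # higher = more attractive query

print(round(kei(exact_freq=850, main_pages_top10=2, titles_top10=4), 1))  # 121.4
```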
Then we act according to our strategy. If we want to create a comprehensive site about cats, exact frequency and competition are not so important to us. If we only need to publish a few articles, we take the queries with the highest frequency and, at the same time, the lowest competition, and write articles based on them.
The semantic core is a scary name that SEOs came up with for a fairly simple thing: we just need to select the key queries for which we will promote our site.
In this article I will show you how to compose a semantic core properly, so that your site quickly reaches the TOP instead of stagnating for months. There are "secrets" here, too.
Before we move on to compiling the core, let's look at what it is and what we should end up with.
Oddly enough, the semantic core is an ordinary Excel file containing the list of key queries for which you (or your copywriter) will write articles for the site.
For example, here is what my semantic core looks like:
I have marked in green the key queries for which I have already written articles, and in yellow those I am going to write about in the near future. Colorless cells mean those queries will come a little later.
For each key query I have determined the frequency and the competitiveness, and come up with a "catchy" title. You should end up with roughly the same kind of file. Right now my semantic core consists of 150 keywords, which means I have "material" for at least 5 months in advance (even at one article a day).
A little further down we will talk about what to expect if you decide to order the collection of a semantic core from specialists. For now I will say briefly: they will give you the same kind of list, only for thousands of "keys". In a semantic core, however, it is not quantity that matters but quality, and that is what we will focus on.
But really, why all this torment? Couldn't you just write high-quality articles and attract an audience that way? You could write them, but you would not attract anyone.
The main mistake of 90% of bloggers is simply writing high-quality articles. I am not joking; they really do have interesting and useful material. But search engines do not know about it. They are not psychics, just robots, and so they do not put those articles in the TOP.
There is another, subtler point here, concerning the title. Say you have a very high-quality article on the topic "How to do business in the mug book". In it you describe everything about Facebook in great detail and professionally, including how to promote communities there. Your article is the most high-quality, useful, and interesting one on the Internet on this topic; nothing else comes close. But it still will not help you.
Imagine that your site was visited not by a robot but by a live checker (an assessor) from Yandex. He realized that you have the coolest article, and manually put you in first place in the results for the query "Community promotion on Facebook".
Do you know what happens next? You will fly out of there very soon, because no one will click on your article, even in first place. People enter the query "Community promotion on Facebook", and your headline reads "How to do business in the mug book". Original, fresh, funny, but... not what was asked for. People want to see exactly what they searched for, not your creativity.
Your article will therefore lose its place in the TOP of the results. And the live assessor, an ardent admirer of your work, can beg the management for as long as he likes to keep you at least in the TOP-10; it will not help. All the first places will be occupied by articles as empty as sunflower-seed husks, copied from one another by yesterday's schoolchildren.
But those articles will have the correct, "relevant" title: "Community promotion on Facebook from scratch" (step by step, in 5 steps, from A to Z, free, etc.). Annoying? You bet. Well then, let's fight this injustice and compose a competent semantic core, so that your articles take the first places they deserve.
There is one more thing that people for some reason don't think much about. You need to write articles often: at least every week, and preferably 2-3 times a week, to get more traffic, faster.
Everyone knows this, but almost no one does it. And all because of "creative stagnation", "I just can't force myself", "simple laziness". In reality, the whole problem lies in the absence of a specific semantic core.
I entered one of my basic keys, "smm", into the search field, and Yandex immediately gave me a dozen hints about what else interests people who are interested in "smm". All I have to do is copy those keys into a notebook. Then I check each of them in the same way and collect the hints for them as well.
After this first stage of collecting the semantic core, you should end up with a text document containing 10-30 broad basic keys, which we will work with further.
Of course, if you write an article for the query "webinar" or "smm", no miracle will happen. You will never reach the TOP with such a broad query. We need to break each basic key down into many small queries on the topic, and we will do that with a special program.
I use Key Collector, but it is paid. You can use its free analogue, the SlovoEB program, which can be downloaded from the official site.
The most difficult thing about this program is setting it up correctly. I show how to set up and use SlovoEB in a separate article, though there I focus on selecting keys for Yandex Direct.
Here, let's walk step by step through using the program to compile a semantic core for SEO.
First we create a new project and name it according to the broad key you want to parse.
I usually give the project the same name as the basic key, so as not to get confused later. And let me warn you against another mistake: do not try to parse all your basic keys at once. It will then be very difficult to sift the "empty" key queries from the grains of gold. Parse one key at a time.
After creating the project, we perform the basic operation, actually parsing the key through Yandex Wordstat. Click the "Wordstat" button in the program interface, enter your basic key, and click "Start collecting".
For example, let's parse the basic key for my blog: "contextual advertising".
After that the process starts, and after a while the program gives us the result: up to 2,000 key queries containing "contextual advertising".
Next to each query is its "dirty" frequency: how many times this key (plus its word forms and tails) was searched per month through Yandex. But I do not advise drawing any conclusions from these figures.
The dirty frequency will show us nothing. If you rely on it, do not be surprised later when a key with 1,000 queries a month does not bring a single click.
We need the net frequency. For that, we first select all the found keys with checkmarks, then click the "Yandex Direct" button and start the process again. Now SlovoEB will look up the exact monthly frequency for each key.
Now we have an objective picture of how many times each query was entered by Internet users over the past month. I suggest grouping all the key queries by frequency, to make working with them more convenient.
To do this, click the filter icon in the Frequency "!" column and set it to filter keys with a value "less than or equal to 10".
The program will now show only the queries whose frequency is less than or equal to 10. You can delete these queries or copy them to another keyword group for later. Less than 10 a month is very little; writing articles for such queries is a waste of time.
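If you export the list, the same cutoff is easy to apply outside the program. A sketch, assuming a semicolon-delimited CSV export with an "exact_frequency" column (both are assumptions; match them to your actual SlovoEB export):

```python
# Sketch: keep only keys whose exact frequency is above 10 in a CSV export.
# The delimiter and column name are assumptions; match your real export.
import csv

with open("keys_export.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f, delimiter=";"))

keep = [r for r in rows if int(r["exact_frequency"]) > 10]
print(f"kept {len(keep)} of {len(rows)} queries")
```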
Now we need to choose the keywords that will bring us more or less decent traffic. For that we need to determine one more parameter: the query's level of competition.
All "keys" in this world are divided into 3 types: high-frequency (HF), mid-frequency (MF), and low-frequency (LF). They also range from highly competitive (HC) through medium competitive (MC) to low competitive (LC).
As a rule, HF queries are HC at the same time. That is, if a query is searched for often, there are many sites that want to promote on it. But this is not always the case; there are happy exceptions.
The art of compiling a semantic core lies precisely in finding queries that have a high frequency while their level of competition is low. Determining the level of competition manually is very difficult.
You can look at indicators such as the number of main pages in the TOP-10, the length and quality of the texts, and the trust level of the sites at the top of the results for the query. All of this will give you some idea of how tough the fight for positions on this particular query will be.
But I recommend using the Mutagen service. It takes into account all the parameters I mentioned above, plus a dozen more that neither you nor I have probably even heard of. After its analysis, the service gives an exact value: the level of competition for this query.
Here I checked the query "setting up contextual advertising in google adwords". Mutagen showed that this key has a competition level of "more than 25", which is the maximum value it reports. And the query gets only 11 views a month, so it does not suit us.
We can copy all the keys we picked up in SlovoEB and run a mass check in Mutagen. After that, all we have to do is look through the list and take the queries that have many requests and a low level of competition.
Mutagen is a paid service, but you can run 10 checks a day for free, and the cost of a check is very low. In all the time I have worked with it, I have not yet spent even 300 rubles.
By the way, about the level of competition: if you have a young site, it is better to choose queries with a competition level of 3-5. If you have been promoting for more than a year, you can take 10-15.
And by the way, about query frequency: we now need to take the final step, which will let you attract a lot of traffic even with low-frequency queries.
As has been proven and verified many times, your site will receive the bulk of its traffic not from the main keys but from the so-called "tails": cases where people type odd key queries into the search box with a frequency of 1-2 a month, but there are a great many such queries.
To see the "tail", just go to Yandex and enter your chosen key query in the search bar. Here is what you will see.
Now you just need to write these additional words out into a separate document and use them in your article. You do not have to place them right next to the main key every time; otherwise the search engines will see "over-optimization" and your articles will fall in the results.
Just use them in different places in your article, and you will receive additional traffic from them as well. I would also recommend trying to use as many word forms and synonyms of your main key query as possible.
For example, we have the query "setting up contextual advertising". Here is how it can be reformulated:
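For instance, variants of this query can be enumerated by combining synonym sets; the synonym lists below are invented for illustration, and the output still needs human vetting:

```python
# Sketch: enumerate word-form/synonym variants of the main key.
# The synonym sets are invented for illustration.
actions = ["setting up", "configuration of", "how to set up"]
subjects = ["contextual advertising", "context ads", "ads in Yandex Direct"]
variants = [f"{a} {s}" for a in actions for s in subjects]
print(len(variants), variants[:3])
# 9 ['setting up contextual advertising', 'setting up context ads', ...]
```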
You never know exactly how people will search for information. Add all these additional words to your semantic core and use them when writing your texts.
This is how we collect a list of 100-150 keywords. If you are compiling a semantic core for the first time, it may take you several weeks.
Or maybe you'd rather not strain your eyes over it? Maybe it is possible to delegate the compilation of the semantic core to specialists who will do it better and faster? Yes, such specialists exist, but their services are not always worth using.
By and large, semantic core specialists will only do steps 1-3 of our scheme for you. Sometimes, for a large additional fee, they will also do steps 4-5 (collecting the tails and checking the competition of the queries).
After that, they will hand you several thousand key queries that you will have to work with further.
The question here is whether you are going to write the articles yourself or hire copywriters for it. If you want to focus on quality rather than quantity, you need to write them yourself. But then a mere list of keys will not be enough: you will need to choose the topics you understand well enough to write a quality article.
And here the question arises: why do we need semantic core specialists at all? Agree that parsing a basic key and collecting the exact frequencies (steps 1-3) is not difficult at all; it will take you literally half an hour.
The most difficult thing is to choose high-frequency queries that have low competition. And now, as it turns out, you need HF-LC queries on which you can write a good article. This is exactly what will take up 99% of the time you spend working on the semantic core, and no specialist will do it for you. Judge for yourself whether it is worth spending money on such services.
It is another matter if you plan from the start to bring in copywriters. Then you do not need to understand the subject of a query. Your copywriters will not understand it either; they will simply take a few articles on the topic and compile "their own" text out of them.
Such articles will be empty, miserable, almost useless. But there will be many of them. On your own you can write at most 2-3 quality articles a week, while an army of copywriters will supply you with 2-3 shoddy texts a day. They will be optimized for queries, which means they will attract some kind of traffic.
In that case, yes, calmly hire semantic core specialists. Let them also draw up the briefs for the copywriters while they are at it. But you understand, that too will cost money.
Let's go over the main ideas of the article once more to consolidate them.
I hope this guide was helpful to you. Save it to your favorites so you don't lose it, and share it with your friends. And don't forget to download my book; in it I show the fastest way from zero to the first million on the Internet (squeezed from 10 years of personal experience =)
See you later!
Your Dmitry Novoselov