Recommended Posts

Posted

I'm currently spending (losing?) a lot of time trying to understand Google Search. Sorry for posting all these questions here, but I have no idea where else to find help. Thanks in advance for any answers and suggestions.

 

1. Google seems to see my site as two different sites, one with "www" and one without. As a result, some pages are indexed under the "www" site and others under the non-www one, leaving nearly as many duplicates as valid pages in Google Search Console. Would creating a sitemap with all-"www" entries make all pages canonical with "www" and eliminate the duplicates? Should I use full URLs instead of relative links on my pages? Should I perhaps create a redirect from http://streetinfo.lu/... to http://www.streetinfo.lu/...?
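For reference, here is a minimal sketch (assuming the Python `requests` package; the `CanonicalFinder` helper is purely illustrative) that fetches a page and prints whichever URL its <link rel="canonical"> tag currently declares, which is a quick way to see which version a page is telling Google to prefer:

```python
# Fetch a page and report which <link rel="canonical"> URL it declares.
# Assumes the `requests` package; the parser class is illustrative only.
from html.parser import HTMLParser
import requests

class CanonicalFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "link" and attributes.get("rel") == "canonical":
            self.canonical = attributes.get("href")

html = requests.get("http://www.streetinfo.lu/", timeout=10).text
finder = CanonicalFinder()
finder.feed(html)
print("Declared canonical:", finder.canonical)  # None if no tag is present
```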

 

2. My site about the homeless appears on page 1 of Google search results, but not with its entry page; instead, a secondary, rarely changing page is shown. Could this be related to Google considering the "www" version of the entry page a duplicate (the non-www version having been chosen as canonical)? Should I add an "alternate" in the secondary page's metadata pointing to the entry page as canonical? Or would the sitemap perhaps resolve the problem?

 

3. A similar situation, and the same questions, concerning two articles about the same social institution. Google displays last year's article in search results; this year's article does not appear there, or more precisely only appears if additional keywords are added to the Google search.

 

4. My downloadable PDF files are not indexed, and I get a "Something went wrong" error when I try to have them indexed in Google Search Console. Could this be because they contain no metadata?

 

I hope someone understands more about these things than I do, and has the time to help. Thanks.

Posted

1. Either way would work. Many prefer the www version and redirect everything to http://www.streetinfo.lu; others prefer no www. Also, Google gives preference to secure sites, so supporting HTTPS would be a good idea.
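Once you've set up such a redirect, a minimal sketch like the one below (assuming the Python `requests` package and network access) can confirm it works: it follows the redirect chain from the bare domain and prints each hop, which should ideally be a single permanent 301 to the www/HTTPS host.

```python
import requests

# Follow redirects from the bare domain and inspect each hop.
response = requests.get("http://streetinfo.lu/", allow_redirects=True, timeout=10)

for hop in response.history:
    # Each intermediate response should be a 301 (permanent) redirect.
    print(hop.status_code, hop.url, "->", hop.headers.get("Location"))

print("Final URL:", response.url)  # ideally https://www.streetinfo.lu/
```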

 

2. That's usually based on traffic. I had that happen with my site as well: my listing originally showed only the download page for a specific software program I publish, because that one program was what most people were interested in. There's not much you can do about this. Be sure to have a sitemap and internal links so Google can find its way around; it'll eventually pull in all the other content, though there are no guarantees on the order it's presented in.
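If it helps, here is a minimal sketch of generating such a sitemap with absolute "www" URLs, so every listed entry matches the preferred host (the page paths below are hypothetical placeholders; substitute your real pages):

```python
# Generate a minimal sitemap.xml whose <loc> entries all use the
# absolute "www" host, matching the preferred site version.
from xml.sax.saxutils import escape

BASE = "http://www.streetinfo.lu"
pages = ["/", "/homeless.html", "/articles/social-institution.html"]  # hypothetical paths

entries = "\n".join(
    f"  <url><loc>{escape(BASE + path)}</loc></url>" for path in pages
)
sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    f"{entries}\n"
    "</urlset>\n"
)

with open("sitemap.xml", "w", encoding="utf-8") as f:
    f.write(sitemap)
```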

 

3. That sounds like poor ranking, probably due to low traffic and the article's young age. Odds are that by next year this one will be showing and the 2020 one won't.

 

4. You can't index PDF files through Search Console. Adding metadata will help, but the files will be indexed naturally over time provided they contain content Google can read (i.e. text). If they're all images, they won't be indexed unless you add metadata.
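For the metadata part, here is a minimal sketch using the third-party `pypdf` package (file names and field values below are hypothetical; substitute your own):

```python
# Copy an existing PDF and attach basic document metadata so that
# crawlers have readable context even if the pages are mostly images.
from pypdf import PdfReader, PdfWriter

reader = PdfReader("download.pdf")  # hypothetical input file
writer = PdfWriter()
for page in reader.pages:
    writer.add_page(page)

writer.add_metadata({
    "/Title": "Street information brochure",  # illustrative values
    "/Author": "streetinfo.lu",
    "/Subject": "Downloadable PDF with descriptive metadata",
})

with open("download-with-metadata.pdf", "wb") as f:
    writer.write(f)
```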

 

It took months for my site to show up decently on Google when searched for specifically, and new content often takes weeks to appear at all. Having plenty of organic traffic, HTTPS support, a sitemap, and quality content that's indexable (i.e. text) is your main concern. That goes a lot further than snake-oil SEO methods.

 

Finally, never use those "bulk search submission" sites. Back in the day those blackhat SEO methods were sort of OK, but nowadays they do nothing at best and hurt your rankings at worst.

Posted

Thanks for taking all this time for me. Anyway, with or without Google, the people who should read my articles never will. People who say what they think, and tell how my country and its citizens really are, are anything but welcome here. Thanks again.
