
September 28 2013

Searching for RSS and Atom feeds (updated)

http://www.dsfc.net/internet/moteurs-internet/moteurs-recherche-flux-rss-atom


Bing reintroduces RSS feed search with the feed: operator! Dsfc. On the same topic: Why Google wants to kill RSS! Number of articles in RSS feeds and the impact on traffic. #SEO: SERPs as seen from foreign countries. Qwant-um search and the infinitely small! Twitter search.

July 24 2013

Understanding Author Rank and the rel="author" and rel="publisher" tags

http://www.ya-graphic.com/2013/02/author-rank-rel-publisher-et-rel-author

Helping #google promote us, or promoting Google Plus? Tags: #seo google
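
For readers who want to see what this markup looks like in practice, here is a minimal sketch of the two tags the post discusses, built from Python strings. The Google+ profile URLs are hypothetical placeholders, not taken from the linked article.

```python
# A minimal sketch (not from the linked article) of the authorship markup
# discussed above. The Google+ URLs are hypothetical placeholders.

AUTHOR_PROFILE = "https://plus.google.com/1234567890"    # hypothetical author profile
PUBLISHER_PAGE = "https://plus.google.com/+ExampleSite"  # hypothetical publisher page

def authorship_tags(author_url: str, publisher_url: str) -> str:
    """Build the <head> markup that ties a page to its author and publisher."""
    return (
        f'<link rel="author" href="{author_url}">\n'
        f'<link rel="publisher" href="{publisher_url}">'
    )

print(authorship_tags(AUTHOR_PROFILE, PUBLISHER_PAGE))
```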


April 15 2011

Search Notes: More scrutiny for Google, more share for Bing

This week, courts worldwide continue their interest in Google, while Bing is edging up in market share. That may actually be good news for Google as it fights antitrust allegations.

Google and privacy and governments

I've written in this column before about both U.S. and international courts looking at all aspects of Google, including antitrust and citizen privacy. That scrutiny continues. The Justice Department has given the go-ahead to Google's acquisition of travel technology company ITA, but it has also instituted conditions to prevent the acquisition from substantially lessening competition. Google agreed to the terms and closed the deal on April 12.

This could pave the way for an FTC antitrust investigation, however. It remains to be seen whether the FTC will consider the concessions stipulated by the Justice Department sufficient to forgo an investigation. As the result of another FTC investigation, Google has agreed to 20 years of privacy audits.

The U.S. isn't the only country keeping an eye on Google. Courts in Italy have ruled that, for search results in Italy, Google has to filter out negative suggested queries in its autocomplete product.

Swiss courts have ruled that Google has to ensure all faces and license plates are blurred out in its Street View product. Google's technology currently catches and blurs out 98%-99% of both already, but the Swiss ruling mandates that Google blur out the remainder by hand if necessary.

In Germany, Google has stopped Street View photography, possibly to avoid burdensome requirements from German courts. Bing is already facing objections from the German government for its plans to operate a similar service.

Bing's growing market share

Both Hitwise and comScore search engine market share numbers are out, and both show Bing gaining.

Hitwise shows that Bing gained 6% in March, for a current share of 14.32%. Bing-powered search (which includes Yahoo) now stands at 30.1%. (Google lost 3% for a share of 64.42%.)

comScore March data shows that Bing's gain from the previous month is much smaller, at 0.3%, for a current share of 13.9% (and a total Bing-powered search share of 29.6%). comScore's data shows Google with a 0.3% increase as well, for a current share of 65.7%.

Bing's increase may be due, in part, to increased usage of Internet Explorer 9.

Yahoo BOSS relaunches

The original version of Yahoo BOSS was intended to spark innovation in the startup industry and provide a free, white-labeled search index that developers could build from. The newest version, however, is a fairly substantial departure from the original mission, as it includes branding and pricing requirements.

Of course, Yahoo search itself has changed since the original launch. When BOSS was first envisioned, Yahoo had its own search engine and was looking to disrupt the search engine landscape and compete with both Bing and Google. Now, Yahoo uses Bing's search engine, and in fact, this new version of BOSS uses Bing's index as well.

Will applications built on Yahoo BOSS continue to use the platform with these new requirements? I'd be interested in talking to developers who are facing this decision.

Google rolls out its "content farm" algorithm internationally

In late February, Google launched a substantial change to its ranking algorithms that impacted nearly 12% of queries. This change was intended to identify low-quality sites, such as so-called content farms, and reduce their rankings.

Google has now made some tweaks and has rolled out the change worldwide for all English queries. Sites around the world are already beginning to see the impact.

One tweak is that Google is now taking into account data about which sites searchers block. Google uses hundreds of signals to determine which web pages are the most useful to searchers, and this is one example of how user behavior can play into that.

Online reputation management

Nick Bilton recently wrote a piece in the New York Times about the rise of online reputation management. In today's online world, a quick search for a person's name or a company can surface past indiscretions, mistakes, or the crazy rantings of someone with a grudge and passable HTML skills.

Mike Loukides followed this up with a Radar post about how he was disturbed by the idea of manipulating search results and using black hat SEO techniques to make negative information disappear.

This topic becomes more important as our lives and culture move online. Just ask Rick Santorum.

So what can you do that's not "black hat" if negative information starts appearing about you or your organization? Google recommends that you "proactively publish [positive] information." For example, make sure your business website is optimized well for search and claim ownership of your business listings on the major search engine maps.

Make sure that you've filled out profiles on social media sites, use traditional public relations to raise visibility, and get involved in the conversation. For instance, if negative forum posts appear about your company in search results, reply in those forums with additional information.

[Note: If the "traditional public relations" that you use is to raise visibility of the negative issue a la Rick Santorum, you'll likely only increase the number of search results that appear about the negative issue, as he's perhaps learned.]

If you're able to get a site owner to take down negative information about you, you can request that Google remove that page from its index. And if you have gotten a court order related to unlawful content, you can request Google remove that content from its index as well.





April 06 2011

On the Internet, you can hire someone to ensure nobody knows you're a dog

Last Sunday, the New York Times published an article about reputation management that I found surprisingly disturbing. Nick Bilton's reporting of this social trend was excellent. People are coming face to face with their pasts in unwanted ways: pictures of ex-spouses upsetting new relationships, stupid things you did in your youth, abuse from trolls, etc. It's a familiar story to anyone who's used the web since its early days. We used to say "On the Internet, nobody knows you're a dog" (following the famous New Yorker cartoon), but it's more correct to say that the Internet is like the proverbial elephant that never forgets. And, more often than not, it knows exactly what kind of a dog you are.

That's old news. We live with it, and the discomfort it causes. What concerns me isn't the problem of damaged reputations, or of reputation management, but the techniques reputation consultancies are using to undo the damage. What Bilton describes is essentially search engine optimization (SEO), applied to the reputation problem. The tricks used range from the legitimate to the distasteful. I don't mind creating websites to propagate "good" images rather than bad ones, but creating lots of "dummy" sites that link to your approved content as a way of gaining PageRank is certainly "black hat" SEO. Not the worst black hat SEO, but certainly something I'm uncomfortable with. And I'm fairly confident that much more serious SEO abuses are taking place in the back room, all in the service of "reputation management."
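
To make the mechanics concrete, here is a toy sketch of the PageRank power iteration on an invented link graph. It illustrates the general idea described above, not Google's actual implementation: a handful of dummy pages that exist only to link to one target visibly inflate that target's score.

```python
# Minimal PageRank sketch on a toy link graph (illustrative only, not Google's
# actual algorithm). Nodes are page names; values are lists of outbound links.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:            # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# A target page plus five "dummy" pages that exist only to link to it.
graph = {"target": [], "other": ["target"]}
graph.update({f"dummy{i}": ["target"] for i in range(5)})

print(sorted(pagerank(graph).items(), key=lambda kv: -kv[1])[:3])
# The target's score is inflated purely by the manufactured inbound links.
```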

On Twitter, one of my followers (@mglauber) made the comment "Right to be forgotten?" I agree, there's a "Right to Be Forgotten." And there's a "Right to Outlive Stupid Stuff You Did in the Past." But the solution is not defacing the public record with black hat SEO tricks. For one thing, that's a cosmetic solution, but someone really out to get dirt will have no trouble finding it. More important, I really dislike the casual acceptance of black hat tricks just because they're deployed in a cause to which most of us are sympathetic.

The real solution is doing something that humans are much better at than computers: forgetting. Your new significant other doesn't like Facebook pictures of you and your previous spouse? Get over it. That's how life is lived now. A potential employer is nervous about dumb stuff you did in high school? Get over it. (And if you're in high school or college, think a bit before you post.) It's true that the Internet never forgets, but is anything ever truly forgotten? There are always old wedding pictures in family albums and newspapers, and pranks recalled by old roommates who are more willing to talk than they should be. The Internet hasn't invented the skeleton in the closet, it's only made it easier to take the skeleton out. That doesn't mean that humans can't be mature.





March 23 2011

In the future we'll be talking, not typing

Search algorithms thus far have relied on links to serve up relevant results, but as research in artificial intelligence (AI), natural language processing (NLP), and input methods continue to progress, search algorithms and techniques likely will adapt.

In the following interview, Stephan Spencer (@sspencer), co-author of "The Art of SEO" and a speaker at the upcoming Web 2.0 Expo, discusses how next-generation advancements will influence search and computing (hint: your keyboard may soon be obsolete).

What role will artificial intelligence play in the future of search?

Stephan Spencer: I think more and more, it'll be an autonomous intelligence — and I say "autonomous intelligence" instead of "artificial intelligence" because it will no longer be artificial. Eventually, it will be just another life form. You won't be able to tell the difference between AI and a human being.

So, artificial intelligence will become autonomous intelligence, and it will transform the way that the search algorithms determine what is considered relevant and important. A human being can eyeball a web page and say: "This doesn't really look like a quality piece of content. There are things about it that just don't feel right." An AI would be able to make those kinds of determinations with much greater sophistication than a human being. When that happens, I think it will be transformative.

What does the future of search look like to you?

Stephan Spencer: I think we'll be talking to our computers more than typing on them. If you can ask questions and have a conversation with your computer — with the Internet, with Google — that's a much more efficient way of extracting information and learning.

The advent of the Linguistic User Interface (LUI) will be as transformative for humanity as the Graphical User Interface (GUI) was. Remember the days of typing in MS-DOS commands? That was horrible. We're going to think the same about typing on our computers in — who knows — five years' time?


Web 2.0 Expo San Francisco 2011, being held March 28-31, will examine key pieces of the digital economy and the ways you can use important ideas for your own success.

Save 20% on registration with the code WEBSF11RAD

In a "future of search" blog post you mentioned "Utility Fog." What is that?

Stephan Spencer: Utility Fog is theoretical at this point. It's a nanotechnology that will be feasible once we reach the phase of molecular nanotechnology — where nano machines can self-replicate. That changes the game completely.

Nano machines could swarm like bees and take any shape, color, or luminosity. They could, in effect, create a three-dimensional representation of an object, of a person, of an animal — you name it. That shape would be able to respond and react.

Specific to how this would affect search engines and use of the internet, I see it as the next stage: You would have a visual three-dimensional representation of the computer that you can interact with.






January 31 2011

Four short links: 31 January 2011

  1. BBC Web Cuts Show Wider Disconnect (The Guardian) -- I forget that most people still think of the web as a secondary add-on to the traditional way of doing things rather than as the new way. Interesting article which brings home the point in the context of the BBC, but you can tell the same story in almost any business.
  2. 40p Off a Latte (Chris Heathcote) -- One of the bits I enjoyed the most was unpacking the old ubiquitous computing cliche of your phone vibrating with a coupon off a latte when walking past a Starbucks. This whole presentation is brilliant. I'm still zinging off how data can displace actions in time and space: what you buy today on Amazon will trigger a recommendation later for someone else.
  3. Long-Form Reporting Finds Commercial Hope in E-Books -- ProPublica and the New York Times have launched long-form reporting in Kindle Singles, Amazon's format for 5k-30k word pieces. On Thursday, he told me his job involved asking the question, “How do you monetize the content when it is not news anymore?” Repackaging and updating the paper’s coverage of specific topics is a common answer.
  4. Why You Should Never Search for Free Wordpress Themes in Google or Anywhere Else -- short answer: free themes are full of SEO rubbish or worse. Every hit on your site boosts someone else's penis pills site, and runs the risk that search engines will decide your site is itself spam.

December 06 2010

Getting Google to notice your ebook

Now that Google has launched its ebook store, it's time for book publishers and authors to learn a little SEO (search engine optimization).

If you sell shoes or cellphones online, you already know SEO, or you're out of business.

Until now, publishers have played in a different pool -- more Amazon/Barnes & Noble than Google -- but the Google eBookstore suddenly gives booksellers a reason to at least wade into SEO.

Because now, if Google can find your book, it can sell your book in a few different ways: via its online eBookstore, in apps for the Apple and Android platforms, or through one of its independent bookstore partners. (See also: How SEO relates to the Google eBookstore.)

But getting noticed by Google is not easy.

I discovered this when I tried to shelve my experimental ebook, "Ebook Publishers to Watch: 2011," cover facing out, on the Internet. I put it on Amazon and uploaded it to Scribd. I created a companion website, with a shopping cart. I sold a few dozen copies, pretty much right off the bat, without doing much of anything. Not bad.

But I couldn't help noticing that as far as Google was concerned I was barely visible.

No, worse than that. My ebook wasn't even the top result on Google when I entered the exact book title into the Google search bar. It was beaten out by a mention of the book on TeleRead. That was sobering. I started thinking: Was I neglecting some obvious SEO techniques? Should I be choosing keywords? Optimizing chapter titles? Posting an ebook sitemap to Google and Bing? Are there emerging best practices for an ebook author?
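
On the sitemap question in particular, a sitemap is just an XML file listing the URLs you want crawled. Here is a minimal sketch that writes one for a hypothetical companion site; the URLs are placeholders, and the format is the standard sitemaps.org protocol that both Google and Bing accept.

```python
# A minimal sketch of generating a sitemap.xml for an ebook's companion site.
# The URLs are hypothetical placeholders; the format is the standard
# sitemaps.org protocol.

PAGES = [
    "https://example.com/",
    "https://example.com/ebook",
    "https://example.com/ebook/sample-chapter",
]

def build_sitemap(urls):
    entries = "\n".join(f"  <url><loc>{u}</loc></url>" for u in urls)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}\n"
        "</urlset>\n"
    )

with open("sitemap.xml", "w", encoding="utf-8") as f:
    f.write(build_sitemap(PAGES))
```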

Google's book view

The good news is that, even before the Google eBookstore, Google has taken books very seriously. I heard this directly from Sergey Brin himself, in a conversation we had after a small press get-together last year. He told me that Google co-founder Larry Page has been interested in scanning and indexing books since the company's early days. We also talked briefly about his footwear, although I digress.

More recently, when I spoke to Matthew Gray, lead software engineer of Google Books Search Quality, he reiterated that visibility of ebooks and books is a priority for Google.

"Our goal is making all the world's information universally accessible and useful, and we believe that a lot of the world's information is in books," he said. "So it's important to us to make that information available. If you need to know something about a disease, or a travel destination, there's good chance the best information is in a book."

For publishers who are members of Google's free Google Books Partner Program -- 35,000 publishers have joined since it launched in 2005 -- every book's content is indexed and made available in the search engine's universal, blended search results. An Internet searcher can usually scroll through about 20 pages of text around the search result, depending on the publisher's preference. Books that are out of copyright are 100 percent available to Internet searchers. For books with hazy copyrights -- the books that are covered by the proposed Google book settlement -- Google serves up much shorter snippets of text around the search result.

In all three cases, books are "discoverable" on the search service. For example, during the financial meltdown of late 2008, Internet users searching for "economic crash" discovered "The Great Crash of 1929" by John Kenneth Galbraith, first published in 1955, one of thousands of books on the backlist of Boston publisher Houghton Mifflin Harcourt. Normally, Galbraith's book would have faded from public attention, but the book's contents were a good match for the search query, and it benefited from the visibility on Google.





Ebook tools and best practices will be examined at the next Tools of Change for Publishing conference (Feb. 14-16, 2011). Save 15% on registration with the code TOC11RAD.





Metadata and market signals


But what about new books and ebooks? How does Google determine which of the new titles, and of the more than 15 million books that have been scanned, float to the top of its search results pages, both in the web search box and in the eBookstore?

The challenge, for Gray and other Google engineers on the Books project, is that the best-known component of Google's algorithm for determining the value of a web resource -- the number of links to it by others -- does not apply to books and ebooks. Although it is possible to link to a selection in certain books on Google Books (here's a hyperlink into the aforementioned Galbraith title), people don't generally create links to the contents of a book or ebook. So linking is not a reliable indicator of quality.

One strategy that Google employs is to tap into the book industry's "rich tradition of metadata." Gray mentioned that Google considers author blurbs, subtitles, synopses, reviews, and author biographies to be valuable sources of information about a book or ebook.

Today, much of this metadata travels with books in an ONIX file. ONIX, which stands for ONline Information eXchange, is the XML-based standard that contains more than 200 data elements: from author name and title, to book reviews, author photos, and excerpts. Google allows its book partners to provide ONIX feeds, Gray said.
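
As a rough illustration of what consuming such a feed involves, the sketch below reads a few fields from an ONIX-style XML record using Python's standard library. The element names are simplified placeholders; a real ONIX record follows the full schema published by EDItEUR.

```python
# A sketch of reading basic fields from an ONIX-style XML feed with Python's
# standard library. The element names here are simplified and illustrative,
# not the exact ONIX schema.
import xml.etree.ElementTree as ET

SAMPLE = """<ONIXMessage>
  <Product>
    <Title>Ebook Publishers to Watch: 2011</Title>
    <Author>Example Author</Author>
    <Synopsis>A short survey of new ebook publishers.</Synopsis>
  </Product>
</ONIXMessage>"""

root = ET.fromstring(SAMPLE)
for product in root.iter("Product"):
    title = product.findtext("Title")
    author = product.findtext("Author")
    synopsis = product.findtext("Synopsis")
    print(title, "|", author, "|", synopsis)
```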

Google also looks at what Gray referred to as "market signals": how often a book has been reprinted, web searches, recent book sales, the number of libraries that hold the book, etc.

Google's book search algorithm incorporates more than 100 "signals," and those signals change constantly. The goal is that all these signals add up to a single, simple result: "The best way to get a book ranked high on Google Books is to write a really good book," Gray said.

And when the signals point to an obvious target of a book search, Google Books now confidently displays one extra-large result: a super-sized book cover of the title. For example, if Google thinks your search terms indicate that you're looking for Malcolm Gladwell's "The Tipping Point," it will feature that at the top of the page rather than any of the half-dozen other books with the same, or very similar, title.

Google Book Search result
Google uses signals to push obvious results to the top of its listings. In this case, Google assumes a query for "tipping point" means the searcher is looking for Malcolm Gladwell's book rather than other books with the same title.



3 best practices for getting Google to notice your book


Despite all this signal sniffing, Gray said there are a few practices that authors and publishers can follow to increase the likelihood that a book or ebook comes to Google's attention.

1. Use descriptive titles and chapter headings -- Gray said an approach that favors "cleanliness of information" will make it easy for Google to find relevant content in a book and serve it up to the inquiring mind on the other side of the Google search box.

For example, if a book on the Internet has a chapter on the history of the web, it would be much better to use the title "History of the Web" than simply, "History."

"We're going to do the best we can," he said, "but more complete chapters titles will help us out."

2. Create quality content outside the book -- The content you create around a book can also make a difference. As an example, Gray cited the bestseller "Freakonomics," by Stephen Dubner and Steven Levitt, pointing out that the first result from a search for the title is not the book, but the New York Times "Freakonomics Blog," which comes ahead of the book's web page and the Amazon listing.

Presumably the frequency of the blog updates, the authority of the New York Times, and the number of inbound links to the column on the Times' website boost the newspaper site's ranking over the original book.

Yet the column's prominence also helps the book's ranking on Google. As more book publishers use the web to augment the content between the covers, this kind of synergy is likely to become more common. "This blurring of the line between books and other content is something that I expect to happen more and more," Gray said.

3. Book covers matter -- One significant piece of metadata that Google has discovered, Gray noted with wry irony, is the book cover.

"A book cover is actually very rich metadata," he said. "People associate a cover style with a particular author, or a particular series, and they respond to that. They recognize the cover and say, 'That's what I'm looking for'."

Gray's advice to book and ebook publishers: pay attention to the covers. "We have observed," he said, "that people actually do judge a book by its cover."







November 01 2010

The economics of gaining attention

A fascinating article at the Daily Beast chronicled an attempt to reverse engineer the Facebook social news feed. It sought to answer questions about whom Facebook chooses to display on your news feed page, and how.

Not surprisingly, Facebook makes assumptions based on behavior to ensure that it propagates people and information with the highest likelihood of gaining attention or engagement. For example, individuals whose profiles are “stalked” by others show up disproportionately in news feeds because Facebook assumes they must be stalked for good reason. They must be interesting.
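
A toy model makes the point clearer. The sketch below scores stories by engagement and by how often the author's profile is viewed, with a simple time decay. The features and weights are invented for illustration; this is not Facebook's actual algorithm.

```python
# Toy sketch of engagement-weighted feed ranking (illustrative only; the
# features and weights are invented, not Facebook's actual algorithm).
from dataclasses import dataclass

@dataclass
class Story:
    author: str
    profile_views_of_author: int   # proxy for how often the author is "stalked"
    comments: int
    likes: int
    hours_old: float

def score(s: Story) -> float:
    # More-viewed authors and more-engaged stories rank higher; older stories decay.
    engagement = 2.0 * s.comments + 1.0 * s.likes + 0.1 * s.profile_views_of_author
    return engagement / (1.0 + s.hours_old)

feed = [
    Story("rarely viewed friend", profile_views_of_author=3, comments=1, likes=2, hours_old=2),
    Story("often viewed friend", profile_views_of_author=400, comments=1, likes=2, hours_old=2),
]
for s in sorted(feed, key=score, reverse=True):
    print(round(score(s), 1), s.author)
```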


As Facebook becomes an increasingly vital part of how businesses connect with customers, the algorithms determining who gets attention will become increasingly important. They shape business communications and behavior.

We have now a long history of content being written to accommodate the rules of search engines -- particularly Google. We research keywords and then ensure they are placed at the front of our headlines and titles. We reorganize content into staccato bursts of bullet points and subtitles, and so on. Optimization of this kind now dominates all professional content production on the web and shapes our experience as consumers of that content.

As our social, economic and political lives are increasingly mediated through a few consolidated technologies such as Facebook and Google, software exerts a profound influence on the way we engage with one another. The natural, sociological secrets of how to gain attention are being codified. In turn, this creates a normative effect on how we behave. We conform to the rules embedded in the code.

We have always written lead lines with an eye to attracting readers, but there are two aspects here that are new:

  1. The widespread incorporation of scientific rigor into the exercise. For example, the Huffington Post does A/B tests of its own headlines to favor the winning headline (a minimal sketch of such a test appears after this list).
  2. The uniformity of the resulting norms. We are conforming to a few dominant algorithms.
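
Here is the minimal sketch referred to in the first point above: two headline variants, their click counts, and a two-proportion z-test to decide whether the difference is more than noise. The numbers are invented, and real editorial tools undoubtedly differ.

```python
# A minimal sketch of a headline A/B test: compare click-through rates of two
# variants with a two-proportion z-test. The counts are invented.
import math

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal approximation.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

z, p = two_proportion_z(clicks_a=120, views_a=5000, clicks_b=165, views_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # variant B wins if z > 0 and p is small
```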

Gaining attention in this world becomes as much about the science of standing out as the art of being outstanding. And every link forged is a form of currency exchange where the market favors the heavyweights.

While I don't doubt that we will see a continued wellspring of creativity emerge from an open web, these algorithms themselves represent a bias toward those who decipher the code. Doing so requires resources that favor the large over the small, and the organization over the individual. There is nothing new to this progression, but it does run counter to the heroic individual archetype (the lone blogger, the basement video show broadcast around the world, etc.) that the web often celebrates as its own unique progeny.





June 13 2010

VernissageTV PDF-magazine No. 13: With Art Basel Week Map and Calendar

Out now: VernissageTV PDF-magazine No. 13, June 2010 with: Larissa Fassler’s “Kotti”, Franz West’s “The Ego and the Id”, Art Cologne 2010, Monica Bonvicini and Krivet at Berlin Gallery Weekend, SEO (studio visit), the Basquiat retrospective at Fondation Beyeler, the design of Alessi, Hong Kong International Art Fair, and: a calendar and map of places and events during Art Basel 2010.

Click image or this link to download the magazine (13 MB).


May 18 2010

Studio Visit: SEO

SEO was born as Soo-Kyoung Seo in Gwangju, Korea, in 1977. After attending the Gwangju School of Arts and studying at Cho-sun University in Gwangju, she moved to Berlin, Germany, in 2001 to study with Prof. Georg Baselitz at the Universität der Künste (UdK). From 2003 to 2004 she was a master student of Georg Baselitz. SEO has received numerous awards and has participated in the biennials in Beijing, Liverpool, Shanghai, Fusing, Prague, and Gwangju.

In her work, SEO unites Asian and European elements. Her artistic technique is quite distinctive: she uses thousands of paper strips in an endless range of colors, which she glues to the canvas to compose her works. VernissageTV met with SEO in her studio in Berlin. In the conversation with Dr. Bettina Krogemann, SEO talks about her decision to move to Berlin, the importance of Baselitz for her artistic development, the subject matter of her works, and the creation process. The conversation is in German; our upcoming PDF magazine will contain a summary of the interview.

The above video is an excerpt. The full-length version is available after the jump.

Studio Visit: SEO. Berlin / Germany, May 2, 2010.

> Right-click (Mac: ctrl-click) this link to download the QuickTime video file.

Full-length version:



December 24 2009

Four short links: 24 December 2009

  1. Jonathan Zittrain on "Minds for Sale" -- video of a presentation he gave at the Computer History Museum about crowdsourcing. In the words of one attendee, Zittrain focuses on the potential alienation and opportunities for abuse that can arise with the growth of distributed online production. He also contemplates the thin line that separates exploitation from volunteering in the context of online communities and collaboration. Video embedded below.
  2. Anatomy of a Bad Search Result -- Physicists tell us that the 2nd law of Thermodynamics predicts that eventually everything in the universe will be the same temperature, the way a hot bath in a cold room ends up being a lukewarm bath in a lukewarm room. The web is entering its own heat death as SEO scum build fake sites with stolen content from elsewhere on the web. If this continues, we won't be able to find good content for all the bullshit. The key is to have enough dishwasher-related text to look like it's a blog about dishwashers, while also having enough text diversity to avoid being detected by Google as duplicative or automatically generated content (a minimal sketch of that kind of duplicate check appears after this list). So who created this fake blog? It could have been Consumersearch, or a "black hat" SEO consultant, or someone in an affiliate program that Consumersearch doesn't even know. I'm not trying to imply that Consumersearch did anything wrong. The problem is systematic. When you have a multibillion dollar economy built around keywords and links, the ultimate "products" optimize for just that: keywords and links. The incentive to create quality content diminishes.
  3. Magplus -- gorgeous prototyping for how magazines might work on new handheld devices.
  4. Glasgow's Joking Computer -- The Glasgow Science Centre in Scotland is exhibiting a computer that makes up jokes using its database of simple language rules and a large vocabulary. It's doing better than most 8-year-old children. In fact, if we were perfectly honest, most adults can't pun to save themselves. Q: What do you call a shout with a window? A: A computer scream. (via Physorg News)
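
The duplicate-detection idea mentioned in the second link can be illustrated in a few lines: compare two texts by the Jaccard overlap of their word shingles. This is a standard textbook technique, not Google's actual duplicate detector, and the sample strings are invented.

```python
# A minimal sketch of near-duplicate detection: Jaccard overlap of word
# shingles. Illustrative only; not Google's actual duplicate detector.

def shingles(text, k=3):
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

original = "this dishwasher review covers noise, capacity and energy use"
scraped  = "this dishwasher review covers noise, capacity and power draw"
print(round(jaccard(original, scraped), 2))  # substantial overlap flags likely copied text
```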
