Another strange tidbit: I tried subscribing in Google Reader via a different URL (feedproxy.google.com/GeekNights), and this time Google Reader loaded each post exactly once, in the correct order, since it considered this to be a "new" feed.
Of course, this one only goes back as far as the FeedBurner feed does (20110228), but it's nicer, being in the correct order and free of duplicates.
That doesn't explain where the duplicates with the weird URLs came from, though.
Looking at Google Reader's recorded copy of the feed (http://www.google.com/reader/atom/feed/http://feeds.feedburner.com/GeekNights?n=1000), I can see that the duplicate entries had incorrect URLs, for example http://example.com/geeknights/20111115/jared-sorensen/ . Since the URL is also the ID, that's why they ended up as duplicates.
Also, from looking at that recorded feed, I'll note that this kind of duplication has happened before, every time the URL format for the episodes changed. Notably, the episode "GeekNights 20090331 - Game Marketing Exaggerations" actually occurs three times, with three different IDs (which were also the URLs): http://frontrowcrew.com/geeknights/20090331/game-marketing-exaggerations/ , http://www.frontrowcrew.com/episodes/2009/03/31/game-marketing-exaggerations/ , and http://www.frontrowcrew.com/?p=790 . These duplications are, in fact, Scott's fault for changing the IDs of the items, causing Google to treat each of these as a distinct post rather than as three copies of the same one.
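What a feed reader effectively does here can be sketched in a few lines of Python (a hypothetical simulation for illustration, not Google Reader's actual code): entries are keyed by their ID, so three URL-derived IDs for one episode become three "posts":

```python
def merge_entries(seen, entries):
    """Deduplicate feed entries by ID, the way a feed reader does.
    An entry whose ID changed is indistinguishable from a new post."""
    for entry in entries:
        seen.setdefault(entry["id"], entry)
    return seen

# One episode, fetched under three different URL-derived IDs:
seen = {}
for url in [
    "http://frontrowcrew.com/geeknights/20090331/game-marketing-exaggerations/",
    "http://www.frontrowcrew.com/episodes/2009/03/31/game-marketing-exaggerations/",
    "http://www.frontrowcrew.com/?p=790",
]:
    merge_entries(seen, [{"id": url, "title": "Game Marketing Exaggerations"}])

print(len(seen))  # 3 -> the same episode shows up three times
```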
From what I can tell, this issue would have been avoided by doing one of two things: 1) not changing the URLs, or 2) using a guid like <guid isPermaLink="false">GeekNights 20090331</guid> and never changing it. In summary: blame Scott.
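A minimal sketch of option 2, using only Python's standard library (illustrative only, not the site's actual feed code): the <link> is free to change whenever the site is reorganised, as long as the guid stays fixed:

```python
import xml.etree.ElementTree as ET

def build_item(title, url, guid_value):
    """Build an RSS <item> whose identity is a stable guid, not its URL."""
    item = ET.Element("item")
    ET.SubElement(item, "title").text = title
    ET.SubElement(item, "link").text = url           # free to change later
    guid = ET.SubElement(item, "guid", isPermaLink="false")
    guid.text = guid_value                           # must never change
    return item

# The same episode under two URL schemes keeps a single identity:
old = build_item("Game Marketing Exaggerations",
                 "http://www.frontrowcrew.com/?p=790",
                 "GeekNights 20090331")
new = build_item("Game Marketing Exaggerations",
                 "http://frontrowcrew.com/geeknights/20090331/game-marketing-exaggerations/",
                 "GeekNights 20090331")
print(old.find("guid").text == new.find("guid").text)  # True
```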
Obviously. But if Google is using the URL in the absence of a guid, they should realize that two different URLs are equivalent in the presence of a permanent redirect.
You should be using guids. They're how you stop having to rely on assumptions about what other services do. I'm surprised that, as a long-time podcaster, you don't use them at all.
1) Your redirects are not working for http://frontrowcrew.com/?p=790 and all other URLs of that form - you should fix this. 2) Your feed does in fact have guids, but as far as I can tell you're just using the URLs as guids. For example, your latest item has <guid isPermaLink="false">http://frontrowcrew.com/geeknights/20120604/government-hackers/ </guid> .
Either way, not properly using guids is definitely your mistake, Scott. From what I can tell, FeedBurner doesn't do anything strange to them.
The remaining question is where the recent duplicates from May 29th, such as http://example.com/geeknights/20111115/jared-sorensen/ came from. My guess is that Scott updated the feed while in the middle of fixing everything up, causing a bunch of incorrect data to appear on FeedBurner, which was then picked up by Google before Scott had finished fixing everything. Of course, this issue wouldn't have happened with proper use of guids.
I didn't even know guid was a thing in RSS. If I look at my code, I didn't specify any guid at all. I put in the bare minimum required fields for valid RSS. Django is adding those guid fields automatically and setting them to the URL all on its own, according to its syndication documentation (https://docs.djangoproject.com/en/1.4/ref/contrib/syndication/).
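The fallback Django applies there can be illustrated like this (a simplified re-implementation for clarity, not Django's actual source): when an item supplies no explicit guid, its link becomes the unique ID, which is exactly why renaming URLs silently renames identities:

```python
def unique_id(item):
    """Simplified sketch of a feed generator's fallback:
    use the item's explicit guid if present, else its link."""
    return item.get("guid") or item["link"]

episode = {"title": "GeekNights 20120604 - Government Hackers",
           "link": "http://frontrowcrew.com/geeknights/20120604/government-hackers/"}
print(unique_id(episode))  # no guid supplied -> the URL becomes the ID
```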
All podcast and RSS software will automatically add a guid, including WordPress, etc. Using URL fields is normally a good way to tell one bit of unique content from another, but not if you are changing redirects! It's fine to use non-valid URLs or redirects as guids, even if you update the redirect to point at a different file. The idea is to keep just one guid for each new bit of content.
I know what a guid is. I just didn't know that RSS used them, or that they mattered. Apparently only Google Reader had a problem with this, and it also isn't respecting the published dates, which are correct. I really don't care that much about something that affects maybe three people in the world.
1) It's not just Google Reader that does this - see Pegu's post on the previous page (although that could have been GR with a different skin) 2) There are 221 people who subscribe to your podcast feed via Google Reader; there's probably around 50 more unique subscribers to the feeds for individual days.
I agree that Google's treatment of published dates is a bit stupid, but it wouldn't have been a problem for anyone if you hadn't temporarily had a bunch of entries with incorrect guids and URLs (namely, stuff like http://example.com/geeknights/20111115/jared-sorensen/). However, messing up the feed with a bunch of invalid and/or duplicate entries is an unnecessary annoyance that you should prevent in the future.
Does it work? Can you listen to the show? All is well.
So interface and presentation no longer matter?
That depends heavily on the most important factor: if it does matter, does it mean extra work for Scott?
On a positive note, the Google search on the site proper, not the forum, has performed quite well any time I've needed to find a show for a reference or link.