Tonight on GeekNights we discuss some of the interesting developments in the world of HTML version 5. In the news, Michael Jackson's death caused some troubles on the Internet, and the Pirate Bay fires a shot across YouTube's bow.
I heard Scott saying that the Pirate Bay will keep fighting, just minutes after reading The Pirate Bay Will Close Its Tracker and Remove Torrents. We will see whether what they are planning will take over, whether another tracker gets popular, or whether decentralized BitTorrent sees more use.
Edit: After some reading, selling was probably the best option, and having the money in a fund for new Internet projects is genius. The Pirate Bay may go down, but The Pirate Bay guys could come back stronger than ever.
Light travels about one foot in one nanosecond (electrical signals in cable are somewhat slower). If you are syncing networked computers to nanosecond accuracy, you had better not swap cables afterwards ;-)
Regarding science and the need for precise time stamping:
Keeping the foot-per-nanosecond figure in mind, think about ATLAS, a detector about the size of the main concourse in Grand Central Station (a little shorter, but the height and width are about right), with tens of thousands of modules, tens of millions of readout channels, and miles upon miles of cables. Collisions at the center happen once every 25 nanoseconds. The produced "debris" will have traveled barely halfway through the detector when the next collision happens. The readout of the detector elements has to happen in the time between the "debris" passing each individual detector layer, and all of that asynchronously acquired data has to be reconstructed into a single event.
Oh yeah, there is usually more than one interaction per collision, so you get overlapping events that have to be disentangled.
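To put rough numbers on the "barely halfway" picture (a back-of-the-envelope sketch; the roughly 25 m diameter and 46 m length are the published ATLAS figures, not something stated above):

\[
c \approx 3\times10^{8}\ \mathrm{m/s} = 0.3\ \mathrm{m/ns} \approx 0.98\ \mathrm{ft/ns},
\qquad
25\ \mathrm{ns} \times 0.3\ \mathrm{m/ns} = 7.5\ \mathrm{m}.
\]

So debris moving at essentially light speed covers only about 7.5 m between collisions, a bit more than half of the 12.5 m from the beamline to the detector's outer edge.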
I have no doubt about the need for such precise timing in a scientific application. I do have doubts about the need for that degree of precision in an enterprise business setting. Surely great precision is needed in real-time applications, but only up to a certain point before it becomes excessive. I don't know much about the specific technology that Rym saw at the trade show. Assuming the technology works as advertised, I would wager that it is not precise enough for scientific applications, but is excessively precise for business applications.
I got kind of a geeky thrill going to an HTML5 video test page and watching videos without using Flash. I can't wait until this goes mainstream; I'm pretty sure YouTube doesn't like being beholden to Adobe any more than any company likes being beholden to any other company.
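For anyone who wants to recreate that thrill, the markup on those test pages boils down to something like this (clip.ogv is a placeholder file name; Firefox 3.5 plays Ogg Theora in it natively, no plugin involved):

    <video src="clip.ogv" controls width="640" height="360">
      Your browser does not support the video element.
    </video>

The fallback text only shows in browsers that don't know the tag, which is where a Flash object would traditionally go.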
"I heard Scott saying that the Pirate Bay will keep fighting, just minutes after reading The Pirate Bay Will Close Its Tracker and Remove Torrents."
That's like hilarious irony, man.
At least with them selling, the pirate spirit will hopefully never truly leave the internets.
Using Internet browsers for alternative applications. See The Pythics Project.
Very interesting!
One thing that is always tricky with XHTML is the MIME type from the web server. See, what most people do with XHTML is write their pages in it, but the web server tells the browser that it is sending regular old HTML. Thus, the XHTML gets handled as HTML, and you lose all the XHTML-ness. If you configure your web server to tell the browser that XHTML is coming, then it works in an awesome XHTML way. The browser might even refuse to render your page if your XHTML isn't perfectly well-formed and valid. However, shitty browsers like IE will shit themselves.
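Concretely, the whole thing comes down to one response header; with Apache, a single AddType application/xhtml+xml .xhtml directive does it (assuming your XHTML files use an .xhtml extension):

    Content-Type: text/html               (parsed as forgiving old tag soup)
    Content-Type: application/xhtml+xml   (parsed as strict XML)

Firefox, Opera, and Safari handle the strict type, and one unclosed tag gets you an XML error page instead of your site. IE (through version 8) doesn't render application/xhtml+xml at all; it offers to download the file instead, which is presumably the self-shitting referred to above.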
I think XHTML still has its place when someone needs to make a user interface that is used by humans through browsers but is also parsed by machines. However, for most web sites that target just humans, HTML, and therefore HTML5, is the way to go. The fact that people are pushing for HTML5 rather than XHTML 2 shows, I think, that XHTML has failed to take over.
One, as you were talking about the fly-killing/eating clock, I heard a fly buzz around my ear. I thought you guys were playing in cheesy sound effects, but no, it was a real fly. I was quite confused.
Two, I have to share the coolest/saddest Flash intro ever for a website: http://www.iccm-1.org/. It is so over the top I almost think it must be a parody. Turn on your sound and keep watching.
It seems like Firefox (at least by default) can't use MP3 files in the <audio> tag, and I haven't found a way to make it do so.
And vis-à-vis the codec issue, wouldn't it be possible to have Firefox call out to ffdshow or some other external filter, the way MPC does on Windows?
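For what it's worth, the workaround the spec itself offers is listing several encodings and letting the browser take the first codec it can actually play; a minimal sketch (file names are placeholders):

    <audio controls>
      <source src="song.ogg" type="audio/ogg" />
      <source src="song.mp3" type="audio/mpeg" />
      Your browser does not support the audio element.
    </audio>

Firefox grabs the Ogg Vorbis source, while browsers that ship an MP3 decoder fall through to the second line, so no external filter is needed; the cost is encoding everything twice.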
Your point? XHTML 2.0 is still a draft, and HTML 5 has an XML serialization, thereby also being XHTML. So which XHTML is 'failing' to 'take over'?
Lucky you on the news release; I had not yet heard of it. Either way, all that means is that a compromise was made, since HTML 5 started with "BAWWWWWWWWW, we don't like you cleaning up our mess with your XHTML, we'll make something new." They even called themselves the WHAT (now) Working Group.
I see useful aspects of both approaches. XML is good for cleaning up the horrible semantic mess, while there is a real need for some of the new tags being added in HTML5. So just put them together!
I personally think that something missing from both standards is a native way of taking parts of your page and putting them in another file, so things like navbars can be imported into all the pages on a site, the way CSS is, without having to go to an external language like PHP. It seems like such basic functionality, and it would improve speed, since you would only have to fetch the repeated elements once. I find the current method annoying, since not every server has PHP, and going through an external language for this just seems dumb. Yes, I know about the object tag (sketched below), but since no one actually implements it, it may as well not exist; besides, it's a bit different from what I am talking about.
This all assumes that the new tags actually get implemented in a usable way (unlike the aforementioned object, which would have served a similar if less specialized function), which is starting to look unlikely. In that case we all fall back on HTML 4, because it works.
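For reference, the object-tag version of a shared navbar looks like this (navbar.html is a placeholder; the syntax is fine, the browser support and styling are what's broken):

    <object data="navbar.html" type="text/html" width="200" height="500">
      Navigation could not be loaded.
    </object>

Like an iframe, the included document sits in its own little box rather than flowing into the page the way a PHP include does, which is part of why it's "a bit different" from what you're describing.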
<xml /><has /><too /><many /><elements />
Also, what servers don't run PHP (or Ruby/Rails, or Python/Django, or Node.js, or another server-side language)?