Tonight on GeekNights, we cover performance profiling at a high level. It's a complex problem that requires an understanding of programming and profilers, but also hardware and statistics. In the news, Orkut is finally dying the true death, right alongside freshmeat.net. ConnectiCon is happening NEXT WEEKEND!
Download MP3
Comments
The Oculus Rift is OK, but you get sick after 30 minutes.
If your 'hello world' is slow, maybe programming isn't for you. Also performance profiling is technical as fuck.
The correct answer is not to use Java. (Exceptions to this rule are... limited)
Not functional programming; I think you mean 'back-end' vs. front end. (Functional programming is totes cool, but kinda has some efficiency issues.)
You can optimize code for efficiency and for usability. These are different things. Super-efficient code is often less readable. The first time you write something, it's always gonna look like butt. To be fair, hacks ship software.
Dude, Quake used BSP maps and stuff! I think Carmack or someone has a paper on that. Basically, always pick a shotty because designers have a vested interest in keeping things in close quarters for performance reasons.
Don't get me started on data in networks. I had a game project take hours to pull.
I really don't know systems and networks that well, or real low-level programming. So call out my bullshit, esp. if you can teach me something.
Eww... Garbage collection... (Bad if you're optimizing for speed, but it does make a language great to hack things up in.)
Did anyone else on the forum follow this podcast episode?
I do agree that hacks ship software and that the first pass of coding often looks like butt. Well, my job is as a low-level systems programmer, so it's kind of my area of expertise.

Garbage collection isn't necessarily that bad, although I prefer automatic reference counting as implemented in more recent versions of Objective-C and Swift, or in C++ if you use something like std::shared_ptr or boost::shared_ptr, solely because of the deterministic release of resources these features offer. Still, a good garbage collector isn't necessarily a bad thing. The main issue with it is that you can't always know for sure when it's going to kick in, or even when an individual resource is going to be released.

When you have folks like Ken Thompson (one of the original Unix creators, also creator of the B programming language that was C's direct ancestor) and Rob Pike (one of the guys behind the Plan 9 research operating system and another Bell Labs wizard who worked with folks like Thompson, Ritchie, and Kernighan for most of his career) creating a new language (Go) that features garbage collection, it goes to show that it's not necessarily a bad thing when done right.

FWIW, garbage collection itself is nothing new. Smalltalk had it in the '70s, and LISP was doing it all the way back in the late '50s. It's mostly a solved problem now.

I'm usually a day or two behind on the podcast episodes, but I do still listen to them.
I was under the impression Java and C# did reference counting, but apparently Java doesn't do it at all (or not to the same effect). Then I thought non-root threads might be what did the reference counting, but then I realized I was thinking of lock striping.
EDIT: I just learned more about Java's weird memory management model.
I tried out Google's Go and that language feels really good to use compared to Java. Could just be me. I also had a read of Apple's Swift and found it difficult to follow; I chalked this up to me being uneducated.
I was just trying to work out how to pick up good programming habits, but with this type of approach I'm likely to be churning out the 'hacks' others are speaking of. Yes, I've been listening since 2006.
Like Scott wondered in the newsletter, there are listeners who feel that they already know their podcast hosts (when they finally get to meet them at PAX Aus).
If you are writing something where every ounce of performance matters, like code for a high-end game, a space rocket, etc., then you can sacrifice readability.
If you are a great programmer, you will be able to optimize for performance without losing readability.
Seriously, I remember going from 16-bit to 32-bit and how liberating it was. My code could be messy and inefficient and still work!
Then you get other fun stuff like Duff's Device.
Ideally, you'd want good comments and easy-to-follow code. If it's impossible to make easy-to-follow code performant enough for your application, then yeah, you'd better have it well-commented. Still, even comments aren't always enough to fully understand what a piece of code does. Bah, the 80-20 rule was in effect even back then. Also, there is a difference between performance and dealing with resource constraints. Sometimes the fastest algorithm may require more resources (RAM, CPU, etc.) than you have available, and in that case you need to fall back on a slower algorithm that better fits your available resources.
The programming language also looks like a dialect of Malbolge, but that's a different issue. :P
Typically when I do write code (mostly PHP these days) I tend to write things twice. The first pass is very clear and very inefficient, while later passes are rewritten to increase performance, with comments added to explain things that are no longer obvious after the performance enhancements.
Since I code for pleasure I am not under any pressure to make my code look nice but it helps me if I have to go back months later and make changes.
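The two-pass approach can be sketched in a few lines (in C++ rather than PHP, but the idea is the same; join_clear and join_fast are hypothetical names for the two passes):

```cpp
#include <string>
#include <vector>

// Pass 1: obvious version. operator+= may reallocate on every append.
std::string join_clear(const std::vector<std::string>& parts,
                       const std::string& sep) {
    std::string out;
    for (std::size_t i = 0; i < parts.size(); ++i) {
        if (i > 0) out += sep;
        out += parts[i];
    }
    return out;
}

// Pass 2: same behavior, but we pre-compute the final length and reserve()
// once up front. The comment records *why*, since the intent is no longer
// obvious from the appends alone.
std::string join_fast(const std::vector<std::string>& parts,
                      const std::string& sep) {
    std::size_t total = 0;
    for (const auto& p : parts) total += p.size();
    if (!parts.empty()) total += sep.size() * (parts.size() - 1);
    std::string out;
    out.reserve(total);  // one allocation instead of repeated reallocations
    for (std::size_t i = 0; i < parts.size(); ++i) {
        if (i > 0) out += sep;
        out += parts[i];
    }
    return out;
}
```

Months later, the pass-1 version is what you read to remember what the function does; the pass-2 comments explain the parts that no longer look obvious.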
Bad: using $Array[] in my program.
Good: using $ArrayOfLicenseData[] in my program.
Now if the comment was something like this:
/*
* The following line takes the Pre Optimized Account Information from 2003
* and subtracts the Federated Universal 8-bit Access Object InterFace value
* from it. It then runs it through the HP-98 calculator emulation routine and
* then performs blah blah mathematical transfers on the result.
*/
(Yeah, I didn't comment all the individual opcodes, but just imagine similarly detailed comments for them. I also struggled to come up with ways to describe the variable names based on what they look like). It would actually be decent, even when using such a horrible language.
I would add C and C++ but I'm learning it next semester.
I'd argue that a programming language that by default (that is, without using preprocessor tricks such as those used in the Obfuscated C Code Contest) allows you to easily write completely illegible code is a flawed or joke language. Perl's problem here is that the language supports so many syntactical shortcuts and context-sensitive sigils and operators that it pushes the bounds of flawed by allowing illegible code (and again, that's ignoring the regex syntax). C isn't quite so bad, since the language isn't so dependent on syntactical shortcuts, context-sensitive sigils, and so on. Obfuscating C code actually requires some effort (via the preprocessor, "clever" use of typedefs and function pointers, etc.) if you want to do things more complicated than using minimal whitespace and single-character variable and function names. Obfuscating Perl seems to be standard operating procedure due to the nature of the language.
Best for low-level systems stuff where you don't want/need assembly, including operating system kernels: C
Best for user-space/slightly higher-level systems/application stuff where you need C-like native performance but want more safety than C offers and/or object orientation: C++
Best for scientific number crunching: Fortran
Best for web stuff: to be fair, any of the common ones such as Python, Ruby, and PHP are probably all roughly equally good. PHP does have some issues, mostly due to the syntax and libraries rather than the language's capabilities/performance.
Best for "Swiss Army Knife" scripting: Take your pick between Python, Perl, and Ruby, with Python and Ruby having an advantage in readability over Perl.
Best for long-running processes where you want decent performance, though not necessarily the best performance, and relatively safe code. Also good if you need cross-platform support: Java
Best for user-driven Windows apps where the user, and not the CPU or I/O, is the primary performance bottleneck: C#
These are just my own personal opinions and there certainly are overlaps between many of these categories (C# and Java in particular overlap quite a bit). These also don't cover all the other random situations where one language may be better than another (Lua is quite excellent if you need an embedded scripting engine, such as in games, for instance). They also don't cover trade-offs that people may make. For example, Fortran may be the best if you're doing some heavy duty nuclear physics simulations on massive datasets, but if you just want to bang out a few calculations on smaller datasets, tools such as MATLAB or Python with a library such as Scientific Python may be better as whatever time you lose due to the calculations taking longer is offset by the time saved in developing the code to perform the calculations.
A lot of language decisions pretty much come down to time spent developing/maintaining vs. time spent executing. Languages that are easier to develop in and/or maintain tend to execute more slowly, but if the execution time is still "fast enough" for your purposes, then you're usually better off saving development time instead of execution time. In other words:
If development_time < execution_time, probably best to go with something like C, C++, Fortran, etc.
If execution_time < development_time, probably best to use some sort of scripting language like Python, Ruby, Perl, MATLAB, etc.
Java and C# kind of straddle the line between these two extremes.
Essentially, the best way that currently exists to determine if a particular language is best for any particular task is to look around and see what languages are used for similar tasks and follow the lead. Re-evaluate every few years as new languages crop up to see if things have changed since the last time.