It's always interesting seeing these sorts of discussions. On HN, every now and then you uncover a conversation about what's financially reasonable, between the guy making $450k/year and the guy making $45k/year.
Many people don't want to spend $250/mo on gas. Whether they do or not is separate.
I had a Nissan Pathfinder once and it was okay when gas was $2-2.50/gal (16 gallon tank, $40/wk, $150/month). Once it got to $4.00/gal damn right I found alternatives to driving that Pathfinder. The 20 minute walk to the store in the hot, Florida sun didn't seem so bad anymore.
I have many friends who bought SUVs (for the "utility") and commuted way out from the exurbs. They had no choice but to eat the cost of $100 fillups while looking for cheaper alternatives.
Yeah, not sure what that guy is on about. Both prices are lower than the Leaf's nominal price, and I've heard you could get ~$10k worth of tax credits or something with the Leaf. Unfortunately I don't remember the details, but I met a guy who leased a Leaf who told me about all this. He got a pretty sweet deal on the lease too.
When I taught Python to my kid and a few other kids who live around my house a couple of years ago, Guido was gracious enough to sign these head-shots for me to give to each kid as a graduation present.
I'm not sure what to make of that. It feels both nice and oddly narcissistic at the same time. I'm not that familiar with Guido though, maybe I just don't understand his style.
Up until now Gitlab has been very unstable. 5.X promises to be much better on that front because they are replacing gitolite. The Gitlab <-> gitolite synchronization is the cause of most of my grief.
I haven't been a 'coder' since 1994, yet I'm definitely "technical". The teams I lead, AFAIK, don't question my contributions because I don't code. Nor are my designs less valid.
In fact, my years of involvement on the operations side of the house sometimes make my 'technical' choices more informed than what straight coders cook up.
Obviously it cannot be faster than a local DNS cache/server (for the RTT). What can be significantly faster is forwarding queries to Google Public DNS instead of your ISP. At least in my experience, ISPs often have slow and overloaded servers which are not well maintained. Recursively resolving completely on your own is often even slower than either of those two choices.
I have been running unbound for a month, and there is no noticeable difference in resolving time. This article provides a good read: http://linuxmafia.com/~rick/googledns.html
45ms beats the hell out of the hundreds (or thousands) of milliseconds that crappy Irish ISPs deliver from their own 'local' name servers. When they're working at all.
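For anyone who wants numbers from their own setup, here's a minimal sketch. Note it goes through the system resolver, so it measures whatever your machine is configured to use; pointing at a specific server like 8.8.8.8 needs `dig @8.8.8.8` or a DNS library such as dnspython.

```python
import socket
import time

def resolve_time_ms(hostname):
    """Time one name resolution through the system resolver, in ms."""
    start = time.perf_counter()
    socket.getaddrinfo(hostname, None)
    return (time.perf_counter() - start) * 1000.0

# "localhost" resolves locally; substitute e.g. "news.ycombinator.com"
# to exercise your actual upstream resolver. The first lookup pays the
# full round trip; repeats usually hit a cache somewhere along the way.
for attempt in range(3):
    print(f"attempt {attempt}: {resolve_time_ms('localhost'):.2f} ms")
```

Run it a handful of times at different hours; overloaded ISP resolvers tend to show up as wildly variable first-lookup times rather than a consistently slow average.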
I have TONS of VHS tapes that I have been slowly digitizing over the past 25 years. The issue I have run up against is that it is harder and harder to get VHS players. New models are almost non-existent, so you have to trawl Craigslist or eBay for used equipment.
Also, the market for good digitizing hardware is drying up too.
The moral: hurry up and digitize now, before the equipment used for this purpose dries up.
I have a few dozen tapes to convert for a medical professional (treatment documentation -- with waivers signed where used in a public context). I'd also like to convert a few personal tapes, e.g. of my great aunts discussing times out on the old farmstead.
Unfortunately, the tapes have some age/wear issues. And the fancy-schmancy Sony DVD/VHS combo recorder/player I was given promptly turns these -- which play well enough directly to the TV -- into extreme pixelation.
So, any equipment/procedure recommendations would be greatly appreciated.
GoDaddy supports the bill because they love taking a contrary position on any given issue. They see most of the industry taking X position on SOPA and so they take Y position. Corporate-wise, they just seem to get off on it.
'cause SSDs are not as reliable as they appear to be when it comes to withstanding server-side workloads. Enterprise-grade SSDs are significantly more expensive than consumer-grade ones: you are not looking at $1~2/GB, but $10~20/GB. Given the capacity required for most use cases, SSDs are hardly a good choice as primary storage for critical servers.
In addition, most RDBMSes are optimized for mechanical disks. Optimizing for SSDs has only recently become interesting, now that SSD prices have dropped to barely reasonable levels.
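The "optimized for mechanical disks" bit mostly comes down to access patterns: databases go out of their way to turn random I/O into sequential I/O because seeks are so expensive on platters. A rough sketch of the gap (treat this as an illustration, not a benchmark — OS page caching will flatten the difference on a freshly written file, and numbers vary wildly by hardware):

```python
import os
import random
import time

# Sequential vs random 4 KiB reads over the same file. On a spinning disk
# the random case is dramatically slower because of seek time; on an SSD
# the two converge, which is why platter-era optimizations matter less.
PATH = "scratch.bin"
BLOCK = 4096
NBLOCKS = 2048  # 8 MiB test file

with open(PATH, "wb") as f:
    f.write(os.urandom(BLOCK * NBLOCKS))

def timed_read(offsets):
    start = time.perf_counter()
    with open(PATH, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return time.perf_counter() - start

seq = [i * BLOCK for i in range(NBLOCKS)]
rnd = random.sample(seq, len(seq))
print(f"sequential: {timed_read(seq):.4f}s  random: {timed_read(rnd):.4f}s")
os.remove(PATH)
```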
> In addition, most RDBMSes are optimized for mechanical disks.
Since SSDs blow the hell out of platters no matter what the workload or access pattern is, you'll still get significantly improved performance, even without SSD-specific optimizations.
The one "optimization" I'd like to see out of SSDs' rise is deoptimization: since access patterns become less important (or at least naive access patterns become less costly), I'd like to see systems simplified and "optimizations" removed rather than new optimizations added.
We (bu.mp) use a lot of SSDs at our datacenter. We've probably used ~100 64GB Intel X25-Es, and recently we've added 20+ Micron P300 disks.
The first thing we did was try to convince the hardware RAID controller not to do anything clever (readahead, etc.), because seek times are practically meaningless on SSDs. Despite our efforts at disabling every optimization we could control that was tailored for rotational platters, we still found that software RAID (Linux md) outperformed a classically great hardware controller — perhaps by virtue of being "stupider".
So that is our go-to configuration now: Micron P300 SLC, 200GB drives, with md raid.
I have a lot of friends using 'consumer grade' SSD for their DB workloads and the difference is night and day.
This might be a horrible example, but one friend had an installation of FileMaker Pro running on a completely tricked-out Xserve (RAID with 15K SAS drives). With 250 concurrent users he was completely maxed out on CPU and many queries took minutes to complete.
When he moved to a 256GB SSD his CPU load now never goes above 20% and not one query takes more than five seconds, period.
Also, please reference my old HN post from over a year ago.
In my experience, SSDs work great in RAID-5 or RAID-6 setups, even for database workloads (blasphemy!). In fact, 6 or 8 consumer SSDs in a RAID-5 array will put your huge FC array of 15k drives to shame.
I have a lot of personal success stories with SSDs but here's my current favorite.
A few months ago I had to help one of our scientists (the company is called 5AM Solutions — they're awesome) run a bioinformatics job written in Perl and R. As it turned out, for long stretches of the processing the job required around 20 GB of memory. The one server that had all the required dependencies installed had only 8 GB at the time.
When I let the job run the first time, it started to page out memory to hard disk. The job ran for about four days, was only about 25% complete, and during that time frame the server was unusable for any other functions. Pretty much everything came to a grinding halt.
Between that first run and the time our new RAM would be installed, just for grins, I gave the system 30 GB of swap space on the locally attached SSD. With that configuration the job finished in 19 hours, and during that time the server was still responsive to other tasks.
When we finally added the appropriate amount of physical RAM the job took only 15 hours to complete.
It is the first time I have ever seen virtual memory be useful.
Virtual memory is what lets us write programs pretending that we own the entire address space, and it is very useful.
Swapping pages to disk, though, has been useful for a very long time. Yes, once your high-performance application starts swapping all the time, your performance is going to suffer by several orders of magnitude. But occasionally swapping pages in and out of disk is part of what makes modern operating systems useful. You left a large PowerPoint presentation open for several days, but never got around to working on it? Not a problem, since if the OS needs that memory, it will just swap out the pages. Without that ability, the OS would need to go around killing processes. (Which it will do if it has to, but it's a rare event because it can swap out pages.)
On modern systems, there are two kinds of addresses. "Virtual" addresses and physical addresses. Virtual addresses are tracked by the operating system, and they can span the entirety of the addressable address space. So, on a 32-bit system that isn't playing any high-memory tricks, that's 0 - 2^32, or 0 - 4 GB.
But your system may not have 4 GB. So the operating system has a data structure called a page table that has the virtual to physical mapping for each process. The processor accesses this table (it caches it in something called a TLB) so that it can convert the virtual address to the physical address.
An example using small numbers. Your program has a pointer to data. That pointer may have value 800. Let's assume that the amount of memory on your system is only between 0 - 400. So the processor has to convert the value 800 to a value between 0 - 400. It's the operating system's job to maintain that valid mapping.
Why does this matter, and why is it so tied up with paging to and from disk? Let's say the OS pages out the page containing that data. Then, later, it's paged back in, but in a different physical location in memory. Your program still has the pointer value 800, but your program still works correctly because the operating system keeps track of where in physical memory 800, for your process, maps to.
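The translation and remapping described above can be sketched in a few lines (a toy model — page size, frame numbers, and the dict-based table are made up for illustration; a real MMU does this in hardware with fixed-size pages, typically 4 KiB):

```python
PAGE_SIZE = 256  # toy page size so the "pointer value 800" example works out

class PageTable:
    """Per-process virtual-page -> physical-frame mapping."""
    def __init__(self):
        self.table = {}

    def map(self, vpn, pfn):
        self.table[vpn] = pfn

    def translate(self, vaddr):
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        if vpn not in self.table:
            raise KeyError(f"page fault: virtual page {vpn} not resident")
        return self.table[vpn] * PAGE_SIZE + offset

pt = PageTable()
pt.map(3, 1)                 # virtual page 3 lives in physical frame 1
vaddr = 3 * PAGE_SIZE + 32   # = 800, the pointer value from the example
print(pt.translate(vaddr))   # -> 288 (frame 1, offset 32)

# The OS pages it out, then pages it back in to a *different* frame:
pt.map(3, 7)
print(pt.translate(vaddr))   # -> 1824; same pointer, new physical location
```

The program's pointer (800) never changes; only the table entry does, which is exactly why paging in and out is invisible to the process.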
People in the Windows world often say "virtual memory" when they mean "swap space" because Windows would call the amount of swap space "virtual memory size." But virtual memory is the technique described above. Read the Wikipedia entry from above, or read an operating systems textbook for a full discussion of it.
That's not entirely correct. The MMU generally handles virtual-to-physical address translation, and the OS is only ever involved if there is a page fault. Outside of OS internals and a few very specific applications, virtual/physical memory is completely transparent. When I hear "virtual memory" I assume it refers to swap space unless otherwise noted, because the technical meaning has such a specific domain.
That's why I noted that the CPU caches the mappings in the TLB. On modern processors, the MMU is integrated with the rest of the processor, so I didn't see the need to introduce another TLA. It's a part of the processor just as much as, say, the floating point unit is. The whole point of my discussion with small pointer values was to demonstrate that the virtual to physical mapping is transparent.
When I hear "virtual memory," I think of the computer science meaning. However, I am a researcher in high performance computing systems.
Still, an SSD makes this all better. When I was using virtual memory on an Ubuntu server and put the swap partition on an SSD, everything worked great. On a rotating platter, not so much.
Oh, no question, SSDs are an improvement. I'm just clarifying the systems concepts involved. SSDs everywhere may change some assumptions in the operating system. For quite a while now, we've considered paging to "disk" as performance death, and have gone through contortions to make good decisions about which pages should be paged out. If paging to "disk" everywhere gets several orders of magnitudes cheaper, we may want to do less up-front processing trying to determine good victim pages.
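For anyone unfamiliar with what "determining good victim pages" means in practice, here's a toy LRU (least-recently-used) pager — one of the classic victim-selection policies. Real kernels use cheaper approximations like the clock/second-chance algorithm, since exact LRU bookkeeping is too expensive; the frame count and access sequence here are made up for illustration:

```python
from collections import OrderedDict

class LRUPager:
    """Toy page-replacement simulator: evict the least recently used page."""
    def __init__(self, nframes):
        self.nframes = nframes
        self.resident = OrderedDict()  # pages in memory, oldest-use first
        self.faults = 0

    def access(self, page):
        if page in self.resident:
            self.resident.move_to_end(page)  # freshly used: protect from eviction
            return
        self.faults += 1                     # page fault: must bring page in
        if len(self.resident) >= self.nframes:
            self.resident.popitem(last=False)  # evict the LRU victim
        self.resident[page] = None

pager = LRUPager(nframes=3)
for p in [1, 2, 3, 1, 4, 2]:
    pager.access(p)
print(pager.faults)  # -> 5 (three cold misses, then 4 evicts 2, then 2 evicts 3)
```

The point of the parent comment is that if eviction to "disk" becomes cheap, the payoff from elaborate versions of this bookkeeping shrinks, and simpler policies start to look attractive again.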
It is entirely possible to have a virtual memory design with less virtual address space than physical address space (eg: 32-bit virtual, 34-bit physical), and virtual addressing would still be useful in this context.