Everything posted by CoffeeFiend

  1. Good stuff! If I had a say in it, that's EXACTLY what I would have chosen! Well, why XP SP2? AFAIK the .NET framework 3 still runs on Win2k. Haven't had time to play with WCF yet (still using the "old" stuff, hoping to make the switch real soon), but it looks VERY, VERY promising! Either way, since you're not a n00b, we can expect something decent (good solid n-tier design). Still curious about the DB side of things: are you going to code it all by hand, use an ORM like NHibernate, or codegen tools (CodeSmith/MyGeneration + template), EntLib or something? What about logging? Log4net? Inquisitive and geeky C# types want to know! No such thing as too many details! Unit-test-wise, I'm personally thinking about switching from NUnit to MbUnit for most stuff (it's quite nice). One more question: what will the license be? In particular, is there a chance it would be open source at some point? In part or entirely? (Can't blame you either way, as I don't usually release my stuff as open source either, but I'd love to contribute if time permits - anything, from localization code, to Data Access Layers for other DB providers, to bugfixes, feature requests, commenting, unit tests, documentation, anything!). Honestly, I'd rather have a local server instead of having every client download straight from the source (much like WSUS works), but I do understand that developing one more app & "layer" to the system would take more time and effort, and complicate things further. Wish I had time to do something like this from scratch, but I've got just too many pet projects already... Again, ideally there would be a forum or possibly a wiki to go with all this, but it's a LOT of work to make it all happen... Edit: BTW, personally I wouldn't worry too much about the pre-XP OS'es. Win2k mainstream support is already over, and most businesses that haven't moved to XP yet (we have) are going to do it soon enough. As for home users, I don't know anyone who still runs Win2k (a few ppl here might, but they're hardly representative of the average home user, whose new PC shipped with XP preinstalled). In a few months, a large percentage of folks will be running Vista anyways. Gotta look forward to the future a bit, even if that means leaving the older OS'es behind sometimes. Personally I don't go out of my way to support old OS'es.
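To show why MbUnit looks so nice: row tests alone are worth the switch. A quick sketch from memory (MbUnit 2.x style - the class/method names are mine, invented for illustration):

using MbUnit.Framework;

[TestFixture]
public class AdderTests
{
    // One test method, several data rows - something plain NUnit 2.x can't do out of the box.
    [RowTest]
    [Row(2, 3, 5)]
    [Row(-1, 1, 0)]
    [Row(0, 0, 0)]
    public void Add_ReturnsSum(int a, int b, int expected)
    {
        Assert.AreEqual(expected, a + b);
    }
}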
  2. I guess I'm curious about the potential implementation of it too (language, DB, everything!) Using BITS would be a good idea for sure (after all, it's not hard to use; there are wrappers for it, articles & sample code and all). Looks like you know what you're in for though (and you're starting with something simple-ish)
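Roughly what a BITS download looks like through one of those wrappers. Note this is a sketch against a HYPOTHETICAL wrapper API - every name here is invented, not any real library's signatures; the actual work underneath is the IBackgroundCopyManager/IBackgroundCopyJob COM interfaces:

// Hypothetical wrapper types (BitsManager, BitsJob) - illustrative only.
BitsManager manager = new BitsManager();
BitsJob job = manager.CreateJob("AppUpdates", BitsJobType.Download);
job.AddFile("http://updates.example.com/foo/1.2.patch",  // remote URL (made up)
            @"C:\Updates\foo-1.2.patch");                // local destination
job.Resume();       // BITS trickles the file down in the background, using idle bandwidth
// ...poll or subscribe for completion, then:
job.Complete();     // commit the finished download to disk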
  3. You're absolutely right jcarle. I also code a few utils in my own time (embedded projects, some "regular" apps like an installer for my unattended setups, etc). Even for hobbyists, time is a major constraint. One only has so much free time. The time I'm not spending coding can be spent with the kids, going fun places, doing all kinds of activities together, visiting relatives, reading a good book, etc. Reinventing the wheel by coding everything in lower-level languages (and without the rich framework) means a LOT of time spent recreating it (often in a lesser way) and far more code to write (and maintain/bugfix). Life's too short for that. I'd rather work on solving the problem quickly than work forever on the underlying low-level implementation of everything. Between your apps using 5% extra resources and having no life at all, it's an easy call for most people. Funny how people who don't even know what obfuscation is are so sure of how much it affects performance, and even find it obvious... Looks like someone had nothing to back up his usual FUD after all... How surprising.
  4. Yes, to both questions. "Centralized" updaters are certainly a good thing, and you can say there's a demand for them (just look at WSUS, SMS and countless similar deployment/patching programs, Installshield Update Manager, etc). But as for being nuts, I'm afraid I can't say no. Do you realize everything that would be required for this to happen?
-You need to write a client that detects the supported programs (a list which must be updated from the web constantly), then checks for updates/upgrades (it must know what to look for - say, in an XML file with the app names/versions and such), downloads the files from... somewhere? (bandwidth's on you?) in a format the updater will be able to use (often not the way they're distributed), then applies them in various ways (you have to write a fairly advanced "installer").
-You will need server-side code (don't know what language you had in mind), with various public APIs for the clients to check for updates (personally I'd use web services) and download them. But that has to work against one gigantic and complex database: a list of every version of every app supported, a list of every single update ever made for each entry in the previous table (products*versions), lists of files/patches/updates (likely a very complex format, could be stored as XML), the link to or name of each update, etc.
-And as if that wasn't already enough work, you need a community behind it. Lots of volunteers. And you'll need to implement LOTS of stuff for this to work. You will need user accounts. A mechanism for people to submit updates - the info being in a complex format along with all the required files. And of course you can't just let anyone submit anything and not have problems. You would need people to beta-test the updates before they're made public (testing phases). You might need a forum for people to interact (discuss what went wrong with update X for app Y version Z, what to do, or explain how they've done it, etc). And for all this, you'd need to add tons of stuff to the previously mentioned database: lists of users and lists of user groups, plus the relationships (UserID -> GroupID / "group memberships"), everything needed for a forum and such, and everything required to have some sort of beta testing of updates (a workflow: create new update -> beta testing phase -> problems, so back to the submitter for fixing, back to beta testing, testing OK -> made public) and all the tables and code to support this (and people to update their submissions), etc.
It would require a large and complex database. A rather large app with fairly complex architecture (even omitting the forum). And it would have to scale very well if you get a significant number of users. It could use lots of bandwidth too. Eventually you'd need funding (perhaps a business model?) to keep it going (hosting/bandwidth costs). And there could be legal aspects, depending on who's hosting the updates (you can't distribute anyone's content without approval). And that's assuming a simple server on the internet and clients for single PCs. It only becomes more complex if you want people to locally run servers (a bit like WSUS) on their LANs and have clients install off those (with the local server syncing with yours over the internet). It's not a bad idea, but I think you don't realize how much work this would require - short and long term. It's not a 1-person project, much less one you can accomplish in a few spare weekends.
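For a rough idea of what the client would have to parse - a made-up manifest along the lines of that "XML file with the app names/versions" (every element/attribute name here is invented for illustration):

<!-- Hypothetical update manifest - format and names invented, just to show the shape of the problem -->
<updates>
  <app name="SomeTool" currentVersion="1.2.1">
    <update fromVersion="1.2.0" toVersion="1.2.1"
            url="http://updates.example.com/sometool/1.2.1.patch"
            sha1="..." size="1048576"
            applyWith="patch" />
  </app>
</updates>

Multiply that by every version of every supported app, and you start to see the size of the database behind it.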
  5. "We" as in "you". Seemingly you don't consider obfuscators as obfuscators... Or perhaps you don't understand what they do or how they work? Entity renaming is the very base of what obfuscators do (they ALL do this!) - renaming namespaces/classes/methods/properties/fields/enums (the metadata). Much harder to understand this way already, especially if it's not easily readable (just numbers, or using weird unicode chars; lots of identical/duplicate names where possible, etc; and if it confuses and crashes ILDASM, then that's a bonus). There are obfuscators out there that do basically just that. Again, sidestepping the benchmark part, because it does NOT make any noticeable different AT ALL, just like the average exe packer (actually, the average exe packer is has way more impact, but do we see people complain about that? Heck, it made no noticeable difference on a 286). The ONLY thing that's different is the MS IL, which perhaps makes a ~1% difference on disk size (not relevant, might just fill a bit more of the empty part of the last cluster it uses). Memory/CPU usage wise, it makes NO DIFFERENCE AT ALL once it's JIT'ed (100% identical output) - and again, the JIT doesn't care if it's human readable names or not. Do a quick test, you'll see it changes nothing. Again, if it's so obvious, you'll surely find benchmarks saying how slow it makes it, or you'll easily be capable to produce some. My FUD meter is reading off the scale... Perhaps it's more like you don't know the tricks modern compilers can do (branch optimizations, instruction scheduling, etc). And yes, I do use optimizing compilers (used GCC and IAR tonite alone, for C code, imagine that, and I also look at the generated code, and there's also an asm part - not x86 code, but the concepts are the same, and I know x86 decently well too). My P4 is a ghetto 519J with slow FSB and ghetto HP "Goldfish" motherboard (onboard video/audio/ethernet and all), paired with high latency slow RAM, no HT or anything like that (and the thing's running so much stuff at once/and in the background, you wouldn't believe it). There's nothing exceptional about this system. If you have a 4.1GHz P4, it's definitely far better than this one (unless it's a half core or something). More like "I want the latest OS with all the new features and cool stuff" or are they supposed to say "I want some ghetto old OS and the slowest hardware I can find"? The newer OS'es have all kinds of new and useful stuff, and I don't see why people wouldn't want the fastest hardware. Besides, if you want to blame anything for forcing people to upgrade all the time, try games. I use all kinds of apps, but what I said is about a developper's standpoint. It's not like I make programs for the sake of making programs. Money is relevant to 99.9% of end users (perhaps not the Oracle cutomers). The price at which you can make software available is totally relevant, and how fast you can get it out the door and such. Customers want something inexpensive (else they won't buy it; I also want an acceptable profit margin - gotta eat and pay the bills sometimes), and they want it right now. Getting it out the door in time and on budget isn't exactly optional for most of us. And using higher level languages and rich frameworks like I do does exactly what you say: it leaves me more time to do other things than just programming. Is all I care about money? No, but when you're making software - or any other product, you have such limits you can't ignore. 
If money was no object, I'm sure we'd have wicked nice cars too, amazing HDTVs, etc. But living in reality, we just can't ignore it. Given the same features, the client will usually pick the program that's half the price (and out the door in half the time) even if it uses 5% extra resources - just like most people won't pay twice as much for a 5% nicer car that arrives half a year late.
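Back to the renaming point, since some people apparently can't picture it. A sketch of what entity renaming does to your code (names are mine; real obfuscators often use unprintable or duplicated names, which raw IL allows but readable C# doesn't):

// Before obfuscation - what you wrote:
public class LicenseValidator
{
    private string customerName;
    public bool IsValid(string key) { /* ... */ return true; }
}

// After entity renaming - the IL semantics are unchanged, so the JIT output is identical:
public class a
{
    private string b;
    public bool c(string d) { /* ... */ return true; }
}

Same program, same speed - just useless to someone reading the disassembly.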
  6. You're just going to have to get used to it (or stick to an old version forever). But once you do, you won't want to go back. I think it's pretty slick. The concept of ribbons seems to be catching on too, MS even licenses the interface to other devs, so likely you'll see this used in many apps in the near future.
  7. It's not quite self-evident at all - quite the inverse! It's self-evident that it doesn't. Goes to show how little you know about obfuscation. Basic obfuscation is mainly about renaming stuff (does not affect performance at all). More complex obfuscation has no more effect on performance than the average exe packer - basically none. When I say "evidence", I mean things like benchmarks that would back those claims (you won't find any, because it's just not true).
Says the guy who brags about his "uber-optimized" 1337 MD5 code in asm. Yeah, it performs far better hashing MD5 in memory than a 10-year-old interpreted language that's unsuited to the task does from disk. Hate to break it to you, but you probably shouldn't be writing assembler - leave it to the pros. Everyone and their dog can write asm up to the MMX level. Past that, it takes real skill, knowledge and experience - things like using SIMD instructions, ordering instructions (makes a real difference on today's CPUs), keeping the pipelines full, preventing cache misses, etc. This is better left to people who do it for a living. Otherwise, you end up with something that's in fact SLOWER than it would have been using a C/C++ compiler - or just about anything else, because most compilers nowadays are FAR better than the average wannabe asm hacker at most things. 350MB/s on a P4 4.1GHz is not something I'd brag about. The way you're talking, this should be quite a bit faster than C/C++ implementations - it's written in asm to be faster than C/C++, and it's "highly optimized". Too bad it loses against .NET. The default .NET MD5CryptoServiceProvider (and you always whine about how inefficient .NET is) beats it hands down: repeatedly 370MB/s+ on my P4 3GHz (example source code on request - a sketch is at the end of this post). The default Java ways are also faster (especially implementations like this one). It loses to libraries like Crypto++ too (written in C++). The point being: you probably shouldn't be writing asm anymore. Every compiler out there seems to be better at it than you (well, except VB6, but I wouldn't brag about that). And the cryptography classes in higher-level languages are written by experts and perform quite well. Sure, you can still write code that makes smaller executables, but that hardly matters at all (I sure don't give a F about a few KB of disk space), and perhaps saves a couple MBs of memory over higher-level languages, but that's hardly an issue anymore. We're not running 486's with 4MB RAM, where saving every byte one could was required. Performance of computationally intensive parts is important though (and your code fails at this). There's no point in writing your own libs in asm, hand-optimizing them, bugfixing them, maintaining them and all (taking lots of hours, costing lots of $ in the process) if all they accomplish is to be slower, waste time, cost money and stay platform-specific. And even if performance was ~10% better in the first place (it ISN'T), it would still make hardly any difference in most apps - shaving 10% off the time your PC spends hashing won't help one bit for the other 90% of the time it's waiting on the HD, so the real gain in overall speed would be more like 1%. There's no point in over-optimizing something when there's no significant difference (something about profiling code, and optimizing what actually needs it).
Java wasn't developed solely for portability. It was one goal (which they have achieved, like you said), but you're willingly ignoring everything else. And they ARE pushing Java for absolutely everything - end-user apps, server middleware, embedded devices, the dreaded applets, etc - far more than Microsoft is! FUD as always. There's no need to upgrade hardware for anything. If your PC can handle a modern version of Windows (like Win2k), it can handle .NET apps just fine. Unless you mean it'll force lusers to throw away their P2 400, but arguably that's a good thing. This is patently false. It's just like saying they only create new versions of Windows to force people to buy new PCs - too bad they don't see a penny from hardware sales. People have been whining about this for ages. The whine about how "Vista is bloated" is just this year's version. People said the same about every version of Windows that ever came out. Win95 worked fine on my 3600$ P133 back then; on 486's it wasn't nearly as fast, and on 386's (which LOTS of people still had) it was a bad joke. New software with new features doesn't run great on outdated hardware. But people quickly forget. Now that they have today's high-end software running on an OS that's like 6 years old (XP), it seems pretty fast. Your point is just as false as the bit about "being useless" that jcarle refuted. Explain how WUD, nLite, Paint.NET and countless other apps really force people to upgrade. Or perhaps you meant that "mindless lusers" will upgrade (even if they don't need to)? You're talking like it's eating every extra bit of HD space we're getting nowadays. It doesn't exactly expand to fill it. You're talking like it's using 100GB out of a 200GB HD. The point is, space is cheap enough that using a required and reasonable amount of it for new features is not a big concern anymore. We do more apps with less code, more apps in less time, more features in less time, and more apps at lower cost - that matters FAR more than a hundred MBs of disk space nowadays. There's just no need for MS Word to run on the same resources edlin did.
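Since I offered: the gist of the benchmark, as a quick sketch (.NET 2.0; the buffer size and run count are arbitrary choices of mine, and numbers will obviously vary by machine):

using System;
using System.Diagnostics;
using System.Security.Cryptography;

class Md5Bench
{
    static void Main()
    {
        byte[] buffer = new byte[64 * 1024 * 1024];   // 64MB, hashed in memory - same terms as his test
        MD5 md5 = new MD5CryptoServiceProvider();

        md5.ComputeHash(buffer);                       // warm-up run so the JIT cost isn't measured

        const int runs = 10;
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < runs; i++)
            md5.ComputeHash(buffer);
        sw.Stop();

        double mbPerSec = (64.0 * runs) / sw.Elapsed.TotalSeconds;
        Console.WriteLine("{0:F0} MB/s", mbPerSec);
    }
}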
  8. Yeah, that's what it seems to come down to. Problem is, people don't seem to know what the .NET framework is. Yeah, it uses HD space. The .NET framework 2.0 uses 150MB - which is a whole nickel worth of HD space at today's prices (some people sure make a very big deal out of 5 cents worth of storage). The space is used by common libs/classes so that not every app has to replicate them, which would use more space in the end. And if you look at what you're getting, you'll see it's not exactly that big - there's other stuff "left there", like the .NET framework installer itself (Microsoft .NET Framework 2.0\netfx.msi) - 25MB for that, and 7MB for the J# 2.0 Redist installer. It gets blamed for the space it takes partly because it doesn't "hide" its installer files under C:\windows\Installer like most other apps do (a folder that's over 500MB on the box I'm writing this from). So we're already down to about 118MB for the framework itself. And for that, you're getting a LOT of things:
-several full compilers (csc for C#, vbc for VB, vjc for J# and jsc for JScript.NET, plus the MSIL assembler and JIT) and the accompanying dev tools and environment (MSBuild, etc) - how much space would that normally take with other companies' compilers? Yeah, a whole LOT more.
-everything to get ASP.NET and Web Services working on your IIS server (to install everything, register stuff, handle page rendering/requests, compile them, HTTP modules, etc)
-everything needed to hook up to a database like SQL Server (for several purposes)
-a testing web server (WebDev.WebServer.exe)
-security tools (like CasPol) and a web-based authorization manager (an AzMan front end)
and many more things - along with the part people seem to think IS the .NET framework (the CLR) - a bit like the JRE, providing tons of rich, standard, consistent, well-designed classes that perform quite well, plus garbage collection (mind you, most people don't seem to understand what that means). Classes for threading, IO, database access (ADO.NET, SqlClient, etc) and transactions, for WMI and performance counters, for localization of apps, for networking, remoting, crypto work of all kinds, directory services, mail, GDI+, messaging, caching, enterprise services, interop, RegExp, XML, etc - and all of this from different programming languages. So it's not like you're only getting minor functionality that should fit in 5MB of DLLs. That sentence sums it up nicely. The adoption rate of .NET is very high, unsurprisingly. It's what MS is pushing [very hard] as the next-generation dev platform, and their dev tools, documentation and all reflect this. Lots of the nice new technologies are .NET-only (like WPF, a.k.a. Avalon). Vista is a step towards that too. Those who want nothing to do with .NET are better off migrating off Windows ASAP [to whatever OS they please], because it's definitely here to stay.
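Case in point, a quick sanity check you can run yourself - the bundled C# compiler is a complete standalone tool (this is the stock install path for .NET 2.0 on 32-bit Windows; adjust the version folder to whatever's on your box):

%WINDIR%\Microsoft.NET\Framework\v2.0.50727\csc.exe /out:Hello.exe Hello.cs
Hello.exe

No Visual Studio required - that's part of what those 118MB buy you.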
  9. Reasonably priced for big businesses, definitely. But for home users that's way overpriced. The Areca is cheaper too (310$). I just checked, and the cheapo 4-port Sabrent SATA RAID cards (PCI, yeah) are 33$. They have a PCI-e version that's cheap too, but it's only 2 ports. Hopefully they'll make a 4- or 8-port version some day...
  10. Not necessarily. MSIL is pretty simple (basic assembly is, too). So it depends on the obfuscator and what it does - some have pretty good tricks up their sleeve. But then again, I don't use an obfuscator alone either (signatures, asymmetric encryption, DPAPI, etc). You could always NGEN your stuff too. I don't want my code to be easily read (like an open book) by ILDASM. But most of these programming-related questions are better answered on the relevant newsgroups. There has been much talk about all this, many many times unsurprisingly, often involving some very intelligent people (many MVPs, experts, book writers, etc). Most people asking programming questions here ought to try newsgroups instead (I don't even look in the "programming" section anymore). And as I expected, the usual anti-.NET troll showed up, with his usual FUD. Obfuscation hardly makes any difference in performance (can you back that claim? didn't think so). And who said it's inefficient in the first place? (oh yeah, you're an anti-.NET troll - everything written in anything other than asm or C++ is inefficient). And there are tons of people who do want these apps, which often require the .NET framework (e.g. nLite). Enough FUD already, quit your jihad; no one cares what you think about .NET. Don't like it? Don't use it.
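For the curious, the DPAPI bit is literally two calls in .NET 2.0 (ProtectedData lives in System.Security.dll, so reference that). A minimal sketch, protecting a per-user secret - the string is just a placeholder:

using System;
using System.Security.Cryptography;
using System.Text;

class DpapiDemo
{
    static void Main()
    {
        // Encrypt so only the current Windows account can decrypt - no key management needed
        byte[] secret = Encoding.UTF8.GetBytes("my license data");
        byte[] blob = ProtectedData.Protect(secret, null, DataProtectionScope.CurrentUser);

        // ...later, under the same account:
        byte[] back = ProtectedData.Unprotect(blob, null, DataProtectionScope.CurrentUser);
        Console.WriteLine(Encoding.UTF8.GetString(back));
    }
}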
  11. Well, I wouldn't go as far as saying "don't bother". But then again, people have been saying the same about the countless "unbreakable" exe-wrappers/encrypters. Does that mean one shouldn't bother at all? Not necessarily. You could say the same about every other software protection of any kind (serials, online/phone activation, HASPs, etc), and even all forms of DRM, as someone who knows enough and is determined enough can, and WILL, break it. You could extend that analogy to padlocks and deadbolts too. They'll keep the average passer-by from walking away with your stuff, but they won't stop anybody who really wants it. Does that mean one shouldn't bother with locks of any kind (no padlock at the gym, no deadbolts on the house, no keys needed for your car, your bank not using safes, etc)? I don't think something being imperfect is a reason not to use it. Being reasonably good is often all that's needed. Locks don't stop thieves; they just keep the honest people honest. If you don't do anything at all to protect your software, you know NOBODY will pay for it, they'll just copy it (pirates will pirate it regardless). Combine that with advantages for people who pay, like support and various perks (could be anything - extras/samples/newsletters with useful content, cheap upgrades, a chance to win something, listening to their feature requests, etc) and perhaps annoyances for those who don't (having to disable auto-update, block it with a firewall, find a crack for every new weekly build - half of which fail hidden checks, etc), and some people will pay. Locks and protection/obfuscation aren't ironclad, but I still use 'em. Anyways, he can make his own choice. And there are lots of obfuscation products to choose from:
-Dotfuscator
-Salamander .NET obfuscator
-9Rays.NET / Spices.Obfuscator
-Aspose.Obfuscator
-CodeVeil (obfuscation and encryption)
-Demeanor for .NET
-dotNet Protector
-Dynu .NET obfuscator
-CliSecure
-Xenocode Postbuild
-Deploy.NET (encryption - not obfuscation)
-LSW-IL-Obfuscator
-Skater .NET Obfuscator
-{smartassembly}
And most likely others... Combined with other methods/techniques used properly (like signing and such - see the sketch below), it can make pretty decent protection. As for {smartassembly}, I've never used it, so no idea.
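On the signing bit: strong-naming is cheap to set up (sn.exe ships with the SDK). A minimal sketch, the .NET 1.x/2.0-era attribute way - the key file name is mine:

// 1) Generate a keypair once, from the command line:   sn -k MyKey.snk
// 2) Reference it from AssemblyInfo.cs:
using System.Reflection;
[assembly: AssemblyKeyFile("MyKey.snk")]
// The runtime then refuses to load the assembly if the IL has been tampered with
// (unless it's re-signed with a different key - it's integrity, not secrecy).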
  12. I never considered such services before, so it was definitely worth looking into - at least to see if I've been wasting hundreds of $ for nothing. Here are their plans for data backups for 1-5 PCs: 30GB/month (I could perhaps live with 10x that) is 800$/year (or 900$ if you pay monthly). The 5-200 PCs plan starts @ 90$/month for 5 PCs max (+50$ setup fee), 10GB max per PC. Their SBE plans aren't much cheaper. So, not cheap (800$ - the price of one year of service - buys over 2TB of HD space), and it requires a LOT of upstream bandwidth - something ISPs limit very much. Mind you, it's still an interesting option. It might be very worthwhile for small businesses that have a few computers and locally keep a small amount of data (MS Office documents and the like), or to back up documents off a small server perhaps. Cheaper than buying backup hardware and software, hiring a consultant to set it all up, having someone change the tapes and send them offsite, etc. Hassle-free too. But for most home users, it's more expensive than buying a couple cheap spare HDs, backing up onto those, and leaving the 2nd at someone else's place as an offsite copy (in case of fire/flood/theft or whatever).
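Quick math on that "over 2TB" claim, at street prices as I write this (assuming ~110$ per 320GB drive - prices move fast):
800$ / 110$ ≈ 7 drives
7 x 320GB ≈ 2.24TB of raw space, vs 30GB/month of hosted backup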
  13. Honestly, I'll take anything over WD drives - they're the only brand I won't buy at any price. We've seen FAR too many go bad (not just talking about number of units, but ratios, in a large corporate setup). I've had similar problems at home (and with friends' & family's PCs too), and never really have with other brands (except the notoriously bad batches that we've all seen - like the 6.x-8.x GB Fujitsus, 5-platter IBM "deathstars", etc). I do not trust my data to WD drives. These days I buy Seagate drives mostly (but also have other brands). 3ware makes good stuff though, but they're not the best IMO. I prefer the Areca cards (PCI-e, does RAID6, available with up to 24 SATAII ports, very fast, has staggered spin-up, good drivers for all OS'es, etc). But regardless of which card is better, hardware SATA RAID cards are WAY too expensive for home use, especially when you have a couple dozen HDs. Either I can buy cheapo software RAID cards (like 50$/4 ports, to supplement those on the motherboards), or I could use one ARC-1280 card - but it's like 1600$USD (+extra for the cache upgrade, battery, tax, shipping, etc), about 2600$CAD with 1GB cache & battery, taxed and shipped (~110$/port). Since most of what I do doesn't really require high performance (e.g. streaming video @ 1mbit to XBMC), cheapo Sabrent Silicon Image PCI cards work just fine (~12$/port, ~10% of the price). It's an easy decision (2600$ vs 300$), unless one truly needs the speed at any cost. It would be a nice xmas gift though (one can dream!)
  14. And to you too. What's with all the spy talk? I'm not from around here either, but I manage pretty well (I don't live in Montreal, so I don't really have much choice). Actually, it's more like frenglish (or is that franglais?) - using just one or the other takes some effort. Alright! I'm looking forward to localized versions of your utilities then! If you need a frenglish translation, I'm your man (J/K, I'd stick to the english version regardless). I've always wanted to visit Vancouver at some point; it looks like a very nice place (I'm from the other coast). Very good points though. And merry xmas to you too, even though it likely doesn't mean the same thing to both of us. I see it as an occasion to spend time with the family and to spoil the kids (and perhaps to over-indulge on food), with no religious meaning.
  15. I know! I also have the older PATA-only version (was like 9$ on special at one point; still works great for all PATA drives). Well worth the 30$ for sure, and yes, a very handy tool (no need to open the case, reboot, etc - just plug and go). As for making a lot of sense... Let's say I've been down that road before (large video server, music, tons of photos including many thousands of 12MB raw files/panos/large PSD files/etc, many databases, a few SCMs - lots of code, lots of documents and ebooks, etc - a few TBs). Unfortunately, there is no really good/practical solution to backups - HDs being more or less the least worst solution really. Bad enough that even big places are doing this too (70GB RAID array for backup). And unfortunately there seems to be nothing really better coming up. Newer backup solutions take forever to become affordable, and by the time they are, the media seems tiny - e.g. beta-ray drives are 1000$, and it would already take me like 3 full spindles of the media to back up my stuff, @ 25$/disc no less. By the time they're affordable (2 years or so?), I might need twice that (32 extra blanks needed for every pair of 400GB drives you buy). Having so many bloody HDs does feel somewhat ridiculous though (but then again, thousands of DVDRs or hundreds of tapes would be no better). So many HDs (and associated noise & heat) that I'm dreaming of a nice and expandable iSCSI SAN instead (it's pretty bad when you wish CoolerMaster Stackers had more space for drives; I need, like, shelving or something). I had high hopes for InPhase's holographic stuff a while ago, but the way it's going, we'll still be waiting in 2015. By the time it's out with its big discs (big by today's standards) and affordable (the drives are supposed to cost like 10k$), the 500$ Dells will likely ship with 2TB HDs (using perpendicular recording or even flash).
  16. Too bad it's a scam? (or alternately, see this instead) When something looks too good to be true, it usually is. And this didn't just look improbable - more like totally absurd and impossible. More than 10x the data density of DVDs on wood pulp? Yeah, OK! An 8.5x11"/A4 sheet, printed @ 600dpi (with no errors or overhead), 24-bit color (not that printers or scanners are anywhere NEAR this accurate), border to border (no margins), gives like 100MB tops. Having even that work in the real world is already a fairy tale. And he supposedly gets what... Just 4500x that? Even at 1200dpi (more than the vast majority of scanners/printers can do), that would still be just 400MB. Realistically (especially due to color calibration/gamut/ink fading/etc), you can likely only get 1/3 of that (less if you add some margins). So he's really more like 15000x over a realistic expectation. Him having 45 seconds of video stored on paper isn't exactly a huge accomplishment either. @ average xvid/mp3 bitrates (say ~875kbit for xvid & 128kbit for mp3 - call it 1mbit/s total), 45 seconds is only ~5.6MB of data - comfortably within the ~100MB/sheet ceiling above, so the demo proves nothing about the absurd densities he's claiming. You could even triple the video bitrate (more or less DVD quality if a decent codec is used) and it would still fit on the one sheet. In other news, I've got this really nice bridge for sale, located in Brooklyn.
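The back-of-the-envelope math, for the record (idealized, zero overhead):
8.5" x 11" = 93.5 sq.in.
93.5 sq.in. x (600 dots/in)^2 = ~33.7M dots
33.7M dots x 3 bytes (24-bit color) = ~101MB per sheet, absolute ceiling
The demo: 45s x 1mbit/s = 45mbit = ~5.6MB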
  17. Well, external hard drives are still... hard drives. That didn't really change anything - it's the same media. Personally, I wouldn't necessarily use one of those though. A LOT of enclosures are notorious for overheating, usually killing their onboard controller, but sometimes the drive too (especially those without fans). Being made of metal is hardly any help. I gave up on these a while ago, and didn't even bother to RMA the last one. And if you're going to need several drives for backing up, you could use something like this USB cable/adapter. It does pretty much all HD types too (PATA/SATA, all sizes). It's surely cheaper to buy one of these than a half dozen [unreliable] enclosures over time - and it really comes in handy. Eventually you could opt for eSATA... It's certainly going to be FAR cheaper too: 30$ once for a good tool that might get a lot of use, plus actual storage costs, using whatever drives currently have the best GB/$ ratio when you need to buy some. That enclosure you pointed to is 350$USD, which works out to 400$CAD (+tax; 1.25GB/$ before tax). For that price, you can buy 3 320GB HDs @ ~110$ea (2.9GB/$; 960GB total) and have money left, or a couple 500GB WD HDs (if you overlook the price of the cable; 2.39GB/$ - you seem to trust WD drives). As the bigger drives get cheaper, you buy bigger ones. Buying drives in enclosures is ALWAYS going to be ridiculously expensive (the enclosure is seemingly costing 200$ extra). As for tape drives, I looked at that a while ago, and it's definitely not worth it! There are 2 main options: 1) Buy a cheap tape drive (possibly 2nd hand/off ebay or such) - like a DLT IV tape library ("changer"). But then you have to buy a gazillion small tapes. I have a few TBs to back up already, and it keeps expanding (lots of it being stuff that doesn't compress well, like AV). Also, the transfer speeds SUCK. Personally, it would have taken me at least 100 tapes to start with, and my initial backup - assuming the tape library always has empty tapes to use and works 24/7 - would have taken over a month. And the price of those small tapes isn't much lower than HDs to start with. The only advantage this has (over #2) is a cheaper drive (but then again, how reliable?). Too many tapes (where does one store hundreds of tapes?), and WAY too slow. 2) Much like #1, but with a newer/nicer tape drive. Like someone mentioned before, say, an LTO3. First off, you'll need a nice SCSI card and cables ($). Then the drive - good luck with that: over 2000$ 2nd hand, and more like 4000$ new (possibly more for a library), e.g. the HP StorageWorks Ultrium 960, which is ~4850$USD + tax and shipping. A pretty hefty initial investment for almost anybody! That's enough money right now to buy fifty 320GB HDs (16TB of backup). And you haven't bought tapes yet! The tapes cost ~90% of what HDs cost, in terms of $/GB. So by the time that ~10% difference pays for a 4850$ drive... You'd have to buy 48500$USD of storage to break even, or the equivalent of about 440 320GB'ers (140TB). There are a couple advantages to this solution though - far fewer tapes than #1, a more reliable drive, and faster transfer speeds. To back up ~2TB this way, you'd need 6 tapes and the drive, which is already ~7000$CAD with tax (+ shipping, and + the SCSI card and cables). As for next-generation optical media (e.g. beta-ray), it may store more than good ol' DVDs per disc, but the price is ridiculous. Last I checked, 25GB discs were ~25$ (1$/GB), which is more expensive than hard drives.
And they require you to buy a 1000$ drive for a format that could be dead in a year. Hopefully you don't make coasters either! And I'd be worried about scratches and such, and media longevity. Might be worth considering in a couple years, if it's still around, once prices have dropped, and once we know that one format or the other is here to stay. Until then, it's not even worth considering (much like the InPhase holographic vaporware). Long story short, hard drives:
-have very good sizes (no need to change media all the time, and store/catalog it all)
-have very good transfer speeds (the best of all these solutions)
-have good longevity, unlike optical media in general
-aren't scratch-prone (dropping one may not be a good thing, but that rarely happens, unlike scratches)
-don't require a specialized drive/reader/writer that costs a lot of $$$, nor SCSI stuff
-have a very good price for the capacity ($/GB or GB/$, whichever way you want to look at it) - and will remain so (if newer tapes/optical media come out with a better $/GB, you need a new drive/reader/whatever - with HDs, you just buy whatever has the best ratio when you need some)
-don't require a hefty initial investment
-don't need expensive specialized backup software (like tapes do), and are supported by almost all programs (from backup/sync apps to archivers, etc)
-are accessible almost instantly (tape has to seek to whatever you need to restore first, so even if the speeds are OK for some, there are delays)
-are a simple technology (no SCSI cards, SCSI IDs, termination, drivers, etc), and you can connect one to any old PC with PATA/SATA ports, or via cheap USB/FW adapters and enclosures (ubiquitous)
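The LTO3 break-even math spelled out (using the ~110$/320GB drive price from above):
drive: ~4850$USD up front
tapes: ~90% of HD cost per GB, i.e. you save ~10% of whatever you'd have spent on HDs
4850$ / 0.10 = 48,500$ of storage spending to recoup the drive
48,500$ / ~110$ per 320GB drive ≈ 440 drives ≈ 140TB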
  18. Pointless, one is FAR better off with Multiple IE for that (it's v4, 5, 5.5 and 6 in one small download, free, no time-bombs, unobtrusive and all). No need to run a RAM-hogging virtual machine just for a web browser.
  19. Exactly! In a tiny system with a couple drives that you hardly ever change/modify, it's easy to do, and to have it stay that way. His case is virtually empty. None of my boxes have fewer than 4 HDs (still a bunch of PATA ones left) and a couple optical drives. Lots of PCI cards (extra NICs, sound cards, DVB cards, RAID cards, SCSI cards, you name it). And a bunch of extras (like the cables for a SB Live 5.1 Live Drive, SCSI cards for a scanner, memory card readers or an LS120 or such things...). It quickly becomes a huge mess in a very cramped case - there's hardly any room for your hands. And even if I did spend lots of time making it all neat, I'd be cutting all the tie-wraps like tomorrow (the next time I have to dig in there). Now if you have a bunch of such systems, and all kinds of stuff going with them: keyboard, mouse/trackball, 2 or more monitors, KVM switch, ethernet switch, 5.1 speaker set(s) with a separate amp, earphones, UPS, laserjet, MCE remote, Intuos, mp3 player docks, external memory card readers, flatbed scanner, film scanner, a power bar or three, cable modem, embedded dev boards and such equipment (various MCU & EPROM programmers, etc), plus other stuff you're using with it (stepper motors, breadboards, sensors, you name it), various lab equipment... Add to that the few odd PC parts, cables, 2 cameras (and related stuff), a couple spindles of burnable media laying around, remotes, a couple telephones, and more of that kind of stuff, and you've got a GIGANTIC mess - and not just the absolute spaghetti-like mess of cables behind the desk(s)! There's stuff all over the place too. Only once you manage to keep all this tidy for any length of time can you tell others "cable management!" (or how someone else's desk should be all nice and tidy)
  20. Talking about heat sinks, there are probably enough of them for the system to hover: 2 for the 2 CPUs, and 3 on the motherboard (5 in total). The thing's just covered in heat sinks. Along with the power usage, it's a sure thing you won't have to heat during winter, and you'll get crazy AC bills during summer... Especially combined with today's power-hungry video cards! There's only one nice thing about the whole setup: 12 SATA ports onboard (I've dreamed of that for a while - the more, the better). But the RAM is a big downside. It needs all 4 slots filled, and if you want to upgrade, you can't just add more - you have to throw away the old sticks and buy bigger ones. Maybe it's just me, but in a high-end 2x dual-core rig with 1000$ CPUs, I'd like 8 memory slots (hard to fit on a normal-sized board for sure, if even possible). Especially when the board has an MSRP of 480$ (enough to buy a nice E6400 and a nice motherboard by itself) - so almost 1500$ with the FX-74 (now add 4 sticks of RAM and everything else). As for "real" quad-cores, I doubt they'll really deliver much more speed - especially on the AMD side, as HT (HyperTransport) is already pretty fast, and the CPUs already have their own RAM banks. It's already widely used like that in Opteron systems and seems good enough for the job (and scales well). Faster interconnects are only going to help so much. Either way, K8L had better be a LOT nicer than this.
  21. And it seems to be not all that great. I was hoping for AMD to come up with a match for the Core 2 Duo, but seemingly it's not faster, it's not cheap (100$ less for a pair of FX-74s than a QX6700, but the pair is slower), and it uses twice as much power (595W for the system tested). I doubt the motherboards will be cheap either, like all dual-socket motherboards, so that "100$ less" might not help much. Doesn't use AM2 either (a short-lived socket, seemingly).
http://www.hothardware.com/viewarticle.aspx?articleid=911
http://www.extremetech.com/article2/0,1697,2065493,00.asp
http://enthusiast.hardocp.com/article.html...oZW50aHVzaWFzdA
Maybe it's just me, but I had expected more of it.
  22. It would be easy to make a 600-page-long list:
-photography stuff is always nice (but generally expensive - the stuff I want, at least)
-puter stuff (but then again, I've never seen anyone ever offer "PC parts" for xmas)
-useful kitchen gadgets (good luck finding something I don't already have!)
-coffee stuff! A Rancilio Rocky would be nice (replacing my Baratza) but on the expensive side
-a JTAGICE mkII. Expensive too. And the odds of getting that for xmas are less than none (that, and all kinds of WAY-overpriced lab stuff, like a high-end LeCroy DSO)
-power tools (a.k.a. toys for grownups)
-a laser HDTV (ok, not for sale yet), a trip to Yurop, a sports car, a large mansion... ok, I'll stop now.
The ridiculously overhyped collection of Apple iTrash is at the very top of my "unwanted" list. Unless it also comes with a sledgehammer and Glad bags, or a small catapult-like contraption and a box of shotgun shells (add a Beretta Xtrema2 for extra fun). The only thing I really want and am definitely getting is quality time with all the family (but then again, I'm paying for the trip). Besides, I'm more worried about what the kids want (pretty long list)... So much shopping to do, so little time!
  23. If you shop around a little bit, you'll be able to find some Skystar 2 cards for that price (or otherwise a Twinhan).
  24. I don't think it's just a "difficult to configure" issue. It seems like a different concept (the GUI) than Winamp, and it just feels weird. I've tried it a few times, but never liked it much. And as for sounding better - perhaps you can back up that claim? I can do bit-perfect/accurate Kernel Streaming & ASIO output. I don't see how foobar could be better. (And DFX in Winamp does make it sound far better for some music types). I like Winamp, mostly for the same reasons Zxian mentioned.
  25. I might have argued that embedded video performance is good enough for most uses, but I certainly didn't recommend getting one either. There are many reasons to buy a dedicated video card, even if onboard video performs well enough for most office work and such: you may one day want to run Vista w/ Aero Glass (you may not want or need Vista right now, but you might change your mind eventually). Or use nice apps that use WPF (that's the main reason for me to buy a card). Or you might want to play the occasional new-ish game. And even if none of that applies, if you're going to tie up a significant part of your RAM as video memory (not necessarily the case, though), then a dedicated video card just might be cheaper: having your onboard video use 256MB of a 1GB DDR2 kit that costs like 200$ with tax is not a very good idea, when a basic card with 256MB of its own RAM can be had for around 50$, leaving more memory for your system, with better 3D performance to boot. It's about future-proofing your system to some extent.