Everything posted by ChrisBaksa

  1. Have you tried Daemon Tools? http://www.daemon-tools.cc I know it works under Vista and is 64-bit compatible. WinRAR will also open an ISO. Chris
  2. Thanks. That is what I was hoping for. Chris
  3. I'm sure you already thought of this, but I'll throw it out anyway... Do you have any other account with admin-level privileges that you can log in with? You can then change the admin password.
  4. Hello All, I have a scenario that I'm toying with, and I'm trying to get some answers as to what the final outcome will be. Let me first set the stage. I have not actually played with Hyper-V yet, which is why I am asking these questions, and I have no way to test due to lack of hardware. I run a Dell PowerEdge 2950 (72 GB mirrored boot volume and six 146 GB drives in a RAID 5 volume for data). It has Windows 2003 R2 64-bit installed and VMware Server version 2. I run 3 VMs on this box (DC, Exchange/Web, app server). The host itself also acts as my file server. The RAID 5 data drive contains all the VM files as well as all my media, application library, user data, etc... I also have 2 external USB/FireWire mass storage devices attached to this box for additional storage and data backup. With that said, my plan is to rebuild this box with Windows 2008 and Hyper-V. I would build a temp box and move my VMs to it while I rebuilt the Dell with 2008 and Hyper-V. This way my environment would still be up and running. The RAID 5 data drive would remain intact with all its data. I would then convert my VMware VMs to Hyper-V VMs. So in theory, when I install Hyper-V the host OS becomes a "virtual" machine but is still the control OS. My question is... will the control OS still see the RAID 5 data volume as it did before I installed Hyper-V? Remember, I used the host OS (in this case the control VM) as my file server. I would simply reshare/repermission my data at this point and I'm done. But if the control OS does not see the volume as its own, I lose the functionality that I had and I waste a lot of high-performance RAID disk space. This is why I don't want to install VMware 3i: I would lose all the space that I currently use as a file server, forcing me to move all that data to an external (slow) storage device. Does anyone know how this will play out? The hardware is fully capable of running a hypervisor, and my end goal is to do just that.
What I don't want to do is embed all my file server data into a VM. If I have a problem with the VM, I'm screwed and the data will most likely be lost. But if the machine will not boot for whatever reason, I can boot off a PE disk and get my data. What do you think? Thanks in advance Chris
  5. Thanks Br4tt3, I am finding that a lot of people are not using any of MS's tools (like the MDT). I've been in contact with MS as well as several other individuals in my situation, and the Unattend.xml is simply generated line by line and checked against the WAIK before the code is rolled into production. One company I spoke with is using a generic unattend.xml and doing a search and replace for strings. Chris
  6. OK... here is the big question. First let me describe my process. I PXE boot to WinPE... I partition and format the drive... I use ImageX to deploy a custom image (I used the Sysprep /OOBE /Generalize /Shutdown switches and then created my image). Now I copy my PnP drivers to the drive (unattend.xml has the locations in it). My Unattend.xml is created via scripting and has my regionalization, IP address, PnP driver paths, and other information. What do I do with the file? Does it get renamed? Where does it go so that when WinPE reboots and the host boots up, it is used and builds with those settings? I assume it has to be placed somewhere on the host's hard drive. But where? I can't find this information anywhere. Thanks. Chris
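For what it's worth, the answer usually given for this workflow is that Windows Setup searches a handful of implicit answer-file locations on first boot, and \Windows\Panther on the deployed volume is the one most people use. A minimal Python sketch of the staging step (the Panther location should be verified against the WAIK documentation for your build; paths here are illustrative):

```python
import os
import shutil
import tempfile

def stage_unattend(target_root, unattend_src):
    """Copy a generated unattend.xml into <target>\\Windows\\Panther,
    one of the implicit locations Windows Setup searches on first boot.
    (Location assumed from the WAIK docs; verify for your build.)"""
    panther = os.path.join(target_root, "Windows", "Panther")
    os.makedirs(panther, exist_ok=True)
    dest = os.path.join(panther, "unattend.xml")
    shutil.copyfile(unattend_src, dest)
    return dest

# Demo against a temp directory standing in for the freshly imaged C: drive.
root = tempfile.mkdtemp()
src = os.path.join(root, "generated.xml")
with open(src, "w") as f:
    f.write("<unattend/>")
print(stage_unattend(root, src))
```

In the real PE-phase script this would run right after the ImageX apply, while the target volume is still mounted under a drive letter.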
  7. I was wondering if anyone is dynamically creating the Unattend.xml for their Windows 2008/Vista installs. I'm looking for information on how people are creating the unattend.xml through scripted methods. Specifically, I'm looking to do this through VBScript. It seems Microsoft has made this process a nightmare. The old Unattend.txt was easy. Anything helps. Chris
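To make the "generate it from a script" idea concrete, here is a hedged sketch in Python (the author's actual tooling is VBScript; the same element-building approach translates directly). The component and setting names follow the published Vista/2008 unattend schema, but any generated file should still be validated against the WAIK before production use:

```python
import xml.etree.ElementTree as ET

NS = "urn:schemas-microsoft-com:unattend"
ET.register_namespace("", NS)

def build_unattend(computer_name, time_zone):
    """Emit a minimal unattend.xml fragment for the specialize pass.
    Only two settings are shown; a real generator would add the
    regionalization, IP, and driver-path components the same way."""
    unattend = ET.Element(f"{{{NS}}}unattend")
    settings = ET.SubElement(unattend, f"{{{NS}}}settings", {"pass": "specialize"})
    comp = ET.SubElement(settings, f"{{{NS}}}component", {
        "name": "Microsoft-Windows-Shell-Setup",
        "processorArchitecture": "amd64",
        "publicKeyToken": "31bf3856ad364e35",
        "language": "neutral",
        "versionScope": "nonSxS",
    })
    ET.SubElement(comp, f"{{{NS}}}ComputerName").text = computer_name
    ET.SubElement(comp, f"{{{NS}}}TimeZone").text = time_zone
    return ET.tostring(unattend, encoding="unicode")

print(build_unattend("SRV-NY-01", "Eastern Standard Time"))
```

Building the tree node by node like this (rather than string concatenation) is what keeps the output well-formed no matter what the front end feeds in.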
  8. Same exact situation here. I can use a standalone PE CD that prompts for input, or I can build via WinPE pushed from a PXE server. You enter the information on a web page and it gets stored in a database. Once PE is loaded, it collects the data from the DB. The same scripts support both building methods. I support 52 server models across HP, Compaq (legacy boxes), IBM, Dell, and VMware. Chris
  9. I figured it out... Need more coffee. Chris
  10. Has anyone experienced a problem running imageX.exe from within VBScript under PE 1.6? I have a VB script that is "supposed to" lay down the WIM image. But when I call the exe from within VB, it totally ignores the command. I've echoed out the command line and it is good, and if I run it manually on the command line, it works. Chris
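A common culprit when a shelled-out command "works by hand but not from the script" is quoting: an unquoted path with spaces gets split into separate arguments by the shell-out call. The safe pattern, illustrated in Python (the original script is VBScript, and all paths below are made up for the example), is to keep each argument separate and let the library do the quoting:

```python
import subprocess

# Passing the command as a list sidesteps quoting problems entirely;
# each element reaches imagex.exe as exactly one argument, spaces and all.
cmd = [
    r"X:\tools\imagex.exe",          # illustrative path, not a real layout
    "/apply",
    r"Z:\images\w2k8 std x64.wim",   # note the embedded spaces
    "1",
    "C:\\",
]

# subprocess.list2cmdline shows the properly quoted command line that
# would be handed to CreateProcess on Windows; args containing spaces
# come back wrapped in double quotes.
print(subprocess.list2cmdline(cmd))
```

In VBScript terms, the equivalent fix is wrapping any path that can contain spaces in its own pair of embedded double quotes before handing the string to WScript.Shell.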
  11. Has anyone integrated the Windows 2008 64-bit Hyper-V NIC into WinPE 2005 (1.6)? I'm having a bit of trouble finding the correct driver. 32-bit works fine; 64-bit is my issue. This is PE 1.6, not 2.x. Chris
  12. No. We wrote a synchronization tool that is much more powerful than DFS. We do have DFS integrated in the firm, but the OS builds work in a very controlled manner: Development, Staging, and Production locations globally. A bad package could blow up a trader's box and lose the firm money. Very tight change management. Chris
  13. i like to think it is efficient: a single image to run every model of server that i manage around the globe, and since there is compression on the wim you save space vs. having them on a share someplace; that share is going to have to replicate or copy, so bw savings are minimal, but that is another chance for errors in communication or transfer. hopefully i don't sound like an arse, just letting you know what we do currently: using a standard wim file with all drivers injected for the model servers we support (6 currently, all from HP). that build is replicated, along with applications and anything else needed, globally via dfs to 43 different locations where we have sms servers. from there the build can be run through WDS on any PXE-bootable server. the build currently gets sequenced through the microsoft deployment toolset so we have multiple install roles available depending on what the server roles will be. i have one build here in HQ that is our master or "gold" image; when we need to add new drivers or replace old ones, i open the wim, update/remove drivers, then set it back in the dfs share. through the power of dfsr, we replicate the bits that change and now have updated images. just the way i go about managing our image. it works great for our situation and i thought i would share what we have done with it!
I'm not doubting your setup. However, my change control process would not play nice with this. We have a weekly sync schedule where my changes are synced to about 40 distribution points around the world from a master location in the States. The WAN traffic generated by the WIM files to Europe/Tokyo/India is very large in that model. And that's just the WIM files; the application packages also get synced during that process. My design is to leave the WIM files unchanged (for the most part). This reduces the need for those large transfers.
My front end will have choices for the roles, and the XML file will be dynamically built based on the selections (stored in a DB for one-click rebuilds). If you think about it, the master image really only needs to be able to access the disk. Everything else is a plug-and-play discovery of drivers. By keeping the drivers separate in a file structure, I have the ability to copy only what I need. Nothing more. Syncs are much smaller and faster, and I have the ability to version the drivers by hardware model. I have run into instances where one driver crossed manufacturer platforms and was incompatible even though it was the same model/part number. Thinking outside the box... the driver file structure I maintain also serves other applications: P2V, V2P, bare-metal restore, etc... By pointing these apps/processes at the file structure, they can use the same production drivers in their discovery. This way I know exactly what driver is certified by me to be used in our environment. My 2003 build supports 49 models of hardware across 5 manufacturers. The 2008 build will start out supporting about 5 models. Chris
  14. the preferable path would be to inject your drivers using PEimg into the WIM file you will be using; check out the WAIK at microsoft downloads
Preferable? Maybe... but not very efficient or manageable. In a large enterprise firm, it is much easier to keep the drivers separate (out of the WIM) and copy them when needed. Most of us have a process to copy only the necessary drivers based on the hardware. This allows your gold image to remain unmodified, which saves large amounts of data from having to be copied to remote locations (across the WAN). Driver versioning is easier, and the WAN traffic is reduced to the drivers that changed, not the entire WIM. The cost of the process is an additional reboot as you prep the host to use the newly copied drivers. Chris
  15. Hi All, I am beginning my Server 2008 build. As I only work on server OSes, I have no knowledge of how to automate Vista installs. Does anyone have any good links or docs on how to do automated installs of 2008 Server? Anything will help. I have to prepare images for STD and ENT editions that can run on multiple vendors' hardware. PE is easy... that's pretty much done. Drivers are easy... done as well. I'm specifically looking for information on making the WIMs, managing them, and how to generate the XML file that replaces the Unattend.txt. All of my work will be in VBScript, but I am open to PowerShell as well. Chris
  16. To get rid of pressing F12 you need to manually copy a new startrom off the Windows media CD and rename it. The directions are in the WinPE manual. Chris
  17. You can boot the .WIM over PXE. You don't even need WDS for that; any combination of TFTP and DHCP server works. It's fairly well documented in the WAIK manual how you set up a TFTP share for booting WinPE over PXE (Windows Preinstallation phase - phase 4). BTW: just to be clear, we are talking about Windows PE 2.0, right?
Good question. I'm on WinPE 1.6 (2005). No plans to go to 2.0. No reason. Too much incompatibility with drivers.
  18. X is the drive letter assigned by WinPE to its OS volume. I don't think you will be able to reassign that unless you hack it up, and I strongly don't recommend it. Chris
  19. You can also put up your own non-authoritative DHCP server. In our setup (no DHCP at all in the datacenters), a Linux DHCP server will assign an IP to a MAC address (and only that MAC address) when told to do so. We have processes that enable it and disable it. The routers' IP helpers are set to point to this host for TFTP. This way DHCP is available on demand. Chris
  20. Nice. I would love to see that snippet of code. The expanded driver layout I use also serves other applications/processes: P2V, V2P, image restores, image restores to dissimilar hardware. By having the drivers all expanded, the other apps can detect and use them when they are executed. Kills 2 birds with one stone. Chris
  21. Yup. You could. However, at our firm all applications and hotfixes are packaged. All servers get the cumulative hotfix package applied, so I really don't need to manage the OS binaries by slipstreaming. This gets me out of the game of tracking what hotfixes were applied and what was replaced; it's all done in the hotfix package. Again... a few extra minutes for install, but much less management in the long run. No confusion. Br4tt3 - You mentioned the PnP path limit. I got around that. I keep an INI file that has a heading for each model. Each line is a path to a required PnP driver for that model. I have a script that walks the section of the INI and builds the PnP path for that model only (dynamically). In the same step it xcopies the folders to the local host before Setup is run. This means I get only the drivers I need and nothing more, in a very controlled fashion. This works for unattended installs as well as Sysprep (images); you just need to reseal when you use the Sysprep method. Example:
[HP DL 360G5]
SSD=Compaq\SSD\7.91
Network=Compaq\Network\Broadcom\NC37xx-380x\
Network=Compaq\Network\Intel\NC61x-71x\
Network=Compaq\Network\Broadcom\NC67x-77x\
Network=Compaq\Network\Intel\NC360T\
Network=Compaq\Network\Emulex\2.40.a2
Not much in this example as the Product Support Packs (SSDs) do the rest. Chris
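The INI-walking step described above can be sketched in a few lines. This is a Python illustration of the logic (the production script is VBScript, and the function name is made up); note that the format deliberately repeats the Network= key, so a stock INI parser that rejects duplicate keys won't work and the section is walked by hand:

```python
def driver_paths(ini_text, model):
    """Walk the [model] section of the driver INI and return the list of
    relative driver folders to copy for that hardware model.  Duplicate
    keys (several Network= lines) are expected, so this parses by hand."""
    paths, in_section = [], False
    for raw in ini_text.splitlines():
        line = raw.strip()
        if not line or line.startswith(";"):
            continue  # skip blanks and comments
        if line.startswith("[") and line.endswith("]"):
            in_section = (line[1:-1] == model)  # entering a new section
            continue
        if in_section and "=" in line:
            paths.append(line.split("=", 1)[1].strip())
    return paths

INI = """\
[HP DL 360G5]
SSD=Compaq\\SSD\\7.91
Network=Compaq\\Network\\Broadcom\\NC37xx-380x\\
Network=Compaq\\Network\\Intel\\NC61x-71x\\
"""

folders = driver_paths(INI, "HP DL 360G5")
# The same list drives both the xcopy step and, joined with semicolons,
# an OemPnPDriversPath-style search path.
print(";".join(folders))
```

The returned list feeds two consumers: each entry is xcopied to the local host, and the semicolon-joined form becomes the PnP driver search path for that model only.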
  22. I believe you can use the command line to set the NICs to FD. It's not pretty, though. You can put this in your startnet.cmd so it happens right after PE comes up. I use a tool that we wrote in-house that configs the speed and duplex of each NIC. Chris
  23. Funny... I tell people that I do unattended installs and they look at me funny. "Really? Why not images?" Then I tell them that I support... 49 models of hardware across 4 different manufacturers (HP, IBM, VMware, Dell), times 3 different operating systems (win2k3 32-bit, win2k3 64-bit, win2k), 5 different service pack versions (win2k3 SP1 (32 and 64-bit), win2k3 SP2 (32 and 64-bit), win2k SP4), and 6 different OS versions (win2k3 Standard (32 and 64-bit), win2k3 Enterprise (32 and 64-bit), win2k Standard, and win2k Advanced)... and follow that up with... who has time for images? Oh, did I mention that I am the ONLY person in my company that engineers automated server OS installs? Actually, in my environment images are a bottleneck, as they have to be copied to many distribution servers around the globe. Unattended installs take a bit longer but are much easier to manage and cost a lot less in bandwidth. Why copy gigs of image data when I can just update drivers when necessary (megs or less)? The install binaries never change. Once the OS is on, it's all post-scripting. Hotfixes are installed as part of the post-scripting piece. The big win here is that everything is installed the first time... every time. There is no rollback or cleanup of an image. Clean install every time. I call this "Tier 1" thinking. Every host is a fresh build with no margin of error. Net net... on a newer server I can get the OS, hotfixes, and baseline applications (driver support packs, antivirus, monitoring software, etc...) on in about 40 minutes. The only thing I don't automate is the NIC teaming, due to the multitude of possible configs we use. The question was raised about version consistency... All applications are packaged (hotfixes too), so I know exactly what is going to be on the box and how it will be configured. Chris
  24. Thanks! I just downloaded the file. I'll take a look-see. Chris
