
ChrisBaksa


Posts posted by ChrisBaksa

  1. Hello All,

    I have a scenario that I'm toying with, and I'm trying to get some answers as to what the final outcome will be. Let me first set the stage: I have not actually played with Hyper-V yet, which is why I am asking these questions, and I have no way to test due to lack of hardware.

    I run a Dell PowerEdge 2950 (72 GB mirrored boot volume and 146 GB x 6 RAID 5 volume for data). It has Windows 2003 R2 64-bit installed and VMware Server version 2. I run 3 VMs on this box (DC, Exchange/Web, App server). The host itself also acts as my file server. The RAID 5 data drive contains all the VM files as well as all my media, application library, user data, etc. I also have 2 external USB/FireWire mass storage devices attached to this box for additional storage and data backup.

    With that said, my plan is to rebuild this box with Windows 2008 and Hyper-V. I would build a temp box and move my VMs to it while I rebuilt the Dell with 2008 and Hyper-V. This way my environment would still be up and running. The RAID 5 data drive would remain intact with all its data. I would then convert my VMware VMs to Hyper-V VMs.

    So in theory, when I install Hyper-V the host OS becomes a virtual machine but is still the control OS. My question is: will the control OS still see the RAID 5 data volume as it did before I installed Hyper-V? Remember, I used the host OS (in this case the control VM) as my file server. I would simply reshare/re-permission my data at this point, and I'm done. But if the control OS does not see the volume as its own, I lose the functionality that I had and I waste a lot of high-performance RAID disk space. This is why I don't want to install VMware 3i: I would lose all the space that I currently use as a file server, forcing me to move all that data to an external (slow) storage device.

    Does anyone know how this will play out? The hardware is fully capable of running a hypervisor, and my end goal is to do just that. What I don't want to do is embed all my file server data into a VM. If I have a problem with the VM, I'm screwed and the data will most likely be lost. But if the machine will not boot for whatever reason, I can boot off a PE disk and get my data.

    What do you think?

    Thanks in advance

    Chris

  2. Thanks Br4tt3,

    I am finding that a lot of people are not using any of MS's tools (like the MDT). I've been in contact with MS as well as several other individuals in my situation, and the unattend.xml is simply generated line by line and checked against the WAIK before the code is rolled into production. One company I spoke with is using a generic unattend.xml and doing a search and replace for strings.
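    That generic-template, search-and-replace approach can be sketched roughly like this (illustrative Python rather than the VBScript used elsewhere in this thread; the @@TOKEN@@ placeholder names and the values are hypothetical, and the XML is abbreviated, not full unattend schema):

```python
# Fill a generic unattend.xml template by replacing hypothetical
# @@TOKEN@@ placeholder strings with per-host values.
TEMPLATE = """<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend">
  <!-- schema abbreviated; @@COMPUTERNAME@@ is a stand-in token -->
  <ComputerName>@@COMPUTERNAME@@</ComputerName>
</unattend>
"""

def render_unattend(values):
    """Return the template with every @@TOKEN@@ replaced by its value."""
    out = TEMPLATE
    for token, value in values.items():
        out = out.replace("@@" + token + "@@", value)
    return out

xml = render_unattend({"COMPUTERNAME": "SRV001"})
```

    Validating the filled-in file once in Windows System Image Manager is still worthwhile, since a plain string replace will not catch schema mistakes.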

    Chris

  3. I plan to do this for our unattended installs. We currently do this for Windows 2000 and Windows Server 2003 OS versions for unattend.txt, so I plan to extend it for Windows Server 2008 as well for unattend.xml.

    Simple VB code to create a FileSystemObject we use is as follows:

    Set objFS = CreateObject("Scripting.FileSystemObject")

    Set objNewFile = objFS.CreateTextFile("c:\unattend.xml")

    objNewFile.WriteLine "<?xml version=" & chr(34) & "1.0" & chr(34) & " encoding=" & chr(34) & "utf-8" & chr(34) & "?>"

    objNewFile.WriteLine "<unattend xmlns=" & chr(34) & "urn:schemas-microsoft-com:unattend" & chr(34) & ">"

    objNewFile.WriteLine "  <servicing>"

    etc...

    objNewFile.Close

    (As you can't use the double-quote character directly inside a VBScript string literal, you have to substitute " & chr(34) & " in the code so that when it writes out the file it fills in the correct character.)

    Just produce and validate your original unattend.xml file in Windows System Image Manager from the WAIK, and then use it as the template for the VBScript to produce dynamically.

    Ok.. here is the big question.

    First let me describe my process.

    I PXE boot to WinPE...

    I partition and format the drive...

    I use ImageX to deploy a custom image (I used the Sysprep /OOBE /Generalize /Shutdown switches and then created my image).

    Now I copy my PnP drivers to the drive (unattend.xml has the locations in it).

    Now my Unattend.xml is created via scripting and has my regionalization, IP address, pnp driver paths and other information.

    What do I do with the file? Does it get renamed?

    Where does it go so that when WinPE reboots and the host boots up, it is used and the build runs with those settings?

    I assume it has to be placed somewhere on the host's hard drive. But where?

    I can't find this information anywhere.

    Thanks.

    Chris

  4. I was wondering if anyone was dynamically creating the Unattend.xml for their Windows 2008/Vista installs.

    I'm looking for information on how people are creating the unattend.xml through scripted methods. Specifically, I'm looking to do this through VBScript.

    It seems Microsoft has made this process a nightmare. The old unattend.txt was easy.

    Anything helps.

    Chris

  5. Same exact situation here.

    I can use a standalone PE CD that prompts for input, or I can build via WinPE pushed from a PXE server.

    You enter the information on a web page and it gets stored in a Database. Once PE is loaded, it collects the data from the DB.

    The same scripts support both building methods.
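    A minimal sketch of that DB lookup, assuming the PE-side script identifies the machine by MAC address (illustrative Python with an in-memory SQLite stand-in for the real database; the table and column names are invented, not from the actual setup):

```python
import sqlite3

# Stand-in for the build database: one row of build settings per
# target MAC address, entered earlier via the web front end.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE builds (mac TEXT PRIMARY KEY, hostname TEXT, ip TEXT, role TEXT)")
db.execute("INSERT INTO builds VALUES ('00:11:22:33:44:55', 'SRV001', '10.0.0.5', 'web')")

def get_build_settings(mac):
    """Fetch the build record for this machine, keyed by MAC address."""
    row = db.execute(
        "SELECT hostname, ip, role FROM builds WHERE mac = ?", (mac,)
    ).fetchone()
    if row is None:
        raise LookupError("no build record for " + mac)
    return {"hostname": row[0], "ip": row[1], "role": row[2]}

settings = get_build_settings("00:11:22:33:44:55")
```

    Keying on the MAC address is one way to make the same scripts work for both the interactive CD and the PXE push: the CD prompts and writes a row, while the PXE path reads a row someone already entered.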

    I support 52 server models across HP, Compaq (legacy boxes), IBM, Dell, and VMware.

    Chris

  6. Has anyone experienced a problem running imageX.exe from within VB from PE 1.6?

    I have a VB script that is "supposed to" lay down the WIM image.

    But when I call the exe from within VB, it totally ignores the command.

    I've echoed out the command line and it is good.

    If I run it manually on the command line, it works.

    Chris

  7. Chris,

    Do you not use DFS? That would make your bandwidth usage go way down, since DFSR in R2 allows for just the changes to be replicated to your sites.

    No.

    We wrote a synchronization tool that is much more powerful than DFS.

    We do have DFS integrated in the firm, but the OS builds work in a very controlled manner.

    Development, Staging, and Production locations globally.

    A bad package could blow up a trader's box and lose the firm money.

    Very tight change management.

    Chris

    I like to think it is efficient :) A single image runs every model of server that I manage around the globe, and since there is compression on the WIM you save space vs. having the files on a share someplace; that share is going to have to replicate or copy anyway, so the bandwidth savings are minimal, and that is another chance for errors in communication or transfer. Hopefully I don't sound like an arse, just letting you know what we do currently: a standard WIM file with all drivers injected for the model servers we support (6 currently, all from HP). That build is replicated, along with applications and anything else needed, globally via DFS to 43 different locations where we have SMS servers. From there the build can be run through WDS on any PXE-bootable server. The build currently gets sequenced through the Microsoft Deployment Toolkit, so we have multiple install roles available depending on what the server's role will be. I have one build here in HQ that is our master or "gold" image; when we need to add new drivers or replace old ones, I open the WIM, update/remove drivers, then set it back in the DFS share. Through the power of DFSR, we replicate the bits that change and now have updated images. That's just the way I go about managing our image. It works great for our situation and I thought I would share what we have done with it!

    I'm not doubting your setup. However my change control process would not play nice with this.

    We have a weekly sync schedule where my changes are synced to about 40 distribution points around the world from a master location in the States.

    The WAN traffic generated by the WIM files to Europe/Tokyo/India is very large in that model.

    And thats just the WIM files, The application packages also get synced during that process.

    My design is to leave the WIM files unchanged (for the most part). This reduces the need for those large transfers.

    My front end will have choices for the roles, and the XML file will be dynamically built based on the selections (stored in a DB for one-click rebuilds).

    If you think about it, the Master image really only needs to be able to access the disk. Everything else is a plug and play discovery of drivers.

    By keeping the drivers separate in a file structure, I have the ability to copy only what I need. Nothing more.

    Syncs are much smaller and faster, and I have the ability to version the drivers by hardware model.

    I have run into instances where one driver crossed manufacturer platforms and was incompatible even though it was the same model/part number.

    Thinking outside the box: the driver file structure I maintain also serves other applications. P2V, V2P, bare-metal restore, etc.

    By pointing these apps/processes at the file structure, they can use the same production drivers in their discovery.

    This way I know exactly what driver is certified by me to be used in our environment.

    My 2003 build supports 49 models of hardware across 5 manufacturers. The 2008 build will start out supporting about 5 models.

    Chris

    Is it possible to build an unattended 2008 that looks for drivers in an OEM driver path like you could with previous versions of Windows?

    Thanks

    The preferable path would be to inject your drivers using PEImg into the WIM file you will be using.

    Check out the WAIK at Microsoft Downloads.

    Preferable? Maybe... but not very efficient or manageable.

    In a large enterprise firm, it is much easier to keep the drivers separate (out of the WIM) and copy them when needed.

    Most of us have a process to copy only the necessary drivers based on the hardware.

    This allows your gold image to remain unmodified, thus saving large amounts of data from having to be copied to remote locations (across the WAN).

    Driver versioning is easier, and the WAN traffic is reduced to the drivers that changed, not the entire WIM.

    The cost of the process is an additional reboot as you prep the host to use the newly copied drivers.

    Chris

  10. Hi All,

    I am beginning my Server 2008 build. As I only work on server OSs, I have no knowledge of how to automate Vista installs.

    Does anyone have any good links or docs on how to do automated installs of 2008 Server?

    Anything will help.

    I have to prepare images for STD and ENT edition that can run on multiple vendors hardware.

    PE is easy... That's pretty much done.

    Drivers are easy... Done as well.

    I'm specifically looking for information on making the WIMs, managing them, and how to generate the XML file that replaces the unattend.txt.

    All of my work will be in VB Script. But I am open to Powershell as well.

    Chris

  11. The only thing I don't like is that you have to hit F12 once your startrom.0 kicks off. I never have been able to figure out how to get rid of that prompt, but I rarely use 1.6 for one old prototype server that doesn't meet the requirements for PE2.0.

    To get rid of pressing F12, you need to manually copy a new startrom off the Windows media CD and rename it.

    The directions are in the WINPE manual.

    Chris

  12. Anyways, I have a WinPE iso, about 150MB in size I am trying to get to boot from the PXE server. I have searched high and low on Google trying to figure out how I can PXE boot an iso image and the items I've tried don't seem to work. Has anyone gotten WinPE or an ISO to boot from a PXE server? The closest I have been was getting the booting............................ message but it just sat there forever and I manually killed it.

    You can boot the .WIM over PXE. You don't even need WDS for that; any combination of TFTP and DHCP server works. It's fairly well documented in the WAIK manual how you set up a TFTP share for booting WinPE over PXE (Windows Preinstallation phase, phase 4).

    BTW: just to be clear, we are talking about Windows PE 2.0, right?

    Good Question.

    I'm on WinPE 1.6 (2005)

    No plans to go to 2.0. No reason. Too much incompatibility with drivers.

  13. Hi all,

    I have created a WinPE CD and customized it to run a user application. I am able to achieve this successfully.

    Can anyone let me know whether the drive letter PE uses (i.e., X:) can be changed? How do I map the local computer's drives to the drives in the PE CD (as shown in the table below)?

    Also, please let me know whether it is possible to run any package/installer through the PE CD.

    In the current PE CD creation steps, we are creating a virtual drive (X:), but this X: drive is already present on one of my machines, so in this condition we have to map the drives actually present in the system to the drives displayed during PE execution. This mapping may be as shown below:

    Hard disk drives ----> Windows PE drives

    C: ----> C:

    D: ----> D:

    E: ----> E:

    F: ----> F:

    G: ----> G:

    H: ----> H:

    I: ----> I:

    S: ----> J:

    V: ----> K:

    X: ----> E:

    Requesting you to get back on this at the earliest. Thanks in advance.

    Thanks and Regards,

    Abhijit

    X: is the drive letter assigned by WinPE to its OS volume.

    I don't think you will be able to reassign that unless you hack it up, and I strongly don't recommend it.

    Chris

    You can also put up your own non-authoritative DHCP server.

    In our setup (no DHCP at all in the datacenters), a Linux DHCP server will assign an IP to a MAC address (and only that MAC address) when told to do so.

    We have processes that enable it and disable it.

    The routers' IP helpers are set to point to this host for TFTP.

    This way DHCP is available on demand.

    Chris

  15. Wild, I just built something similar, but I used the DriverPacks program/method, heavily customized/rewritten.

    Mine reads the BIOS and determines the machine type; based on that, it walks an INI, and if it finds a match it ROBOCOPYs the filenames listed in the INI to the typical DriverPacks location I use. The files listed are 7-Zip packages I build, and are simply 1 file per driver. So, 1 file is for a modem for that specific model, 1 for audio, 1 for the chipset, etc. Then I just let DriverPacks take over and do the rest; works perfectly every time. The other cool thing: no duplicate drivers. I keep a manifest of what drivers I've already packaged, and if a different model computer uses the same hardware, I just copy/paste that package info from 1 spot in the INI to the corresponding spot.

    Works wonders!!

    Nice. I would love to see that snippet of code.

    The expanded driver layout I use also serves for other applications/processes.

    P2V, V2P, Image restores, Image restores to dissimilar hardware.

    By having the drivers all expanded, the other apps can detect and use them when they are executed.

    Kills 2 birds with one stone.

    Chris

    You know you could just slipstream all your hotfixes, right?

    Yup. You could.

    However, at our firm all applications and hotfixes are packaged.

    All servers get the cumulative hotfix package applied, so I really don't need to manage the OS binaries by slipstreaming.

    This gets me out of the game of tracking what hotfixes were applied and what was replaced. It's all done in the hotfix package.

    Again... a few extra minutes for install, but much less management in the long run. No confusion.

    Br4tt3 - You mentioned the PNP path limit.

    I got around that. I keep an INI file that has a heading for each model. Each line is a path to a required PnP driver for that model.

    I have a script that walks the model's section of the INI and builds the PnP path for that model only (dynamically).

    In the same step it xcopies the folders to the local host before Setup is run.

    This means I get only the drivers I need and nothing more, in a very controlled fashion.

    This works for unattend as well as Sysprep (images); you just need to reseal when you go the Sysprep route.

    Example:

    [HP DL 360G5]

    SSD=Compaq\SSD\7.91

    Network=Compaq\Network\Broadcom\NC37xx-380x\3.4.10.0

    Network=Compaq\Network\Intel\NC61x-71x\8.9.1.0

    Network=Compaq\Network\Broadcom\NC67x-77x\10.39.0.0

    Network=Compaq\Network\Intel\NC360T\9.7.38.0

    Network=Compaq\Network\Emulex\2.40.a2

    Not much in this example as the Product Support Pack (SSD's) do the rest.
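    A rough sketch of walking such an INI section (illustrative Python, not the production VBScript; configparser needs unique keys, so the repeated Network= lines from the example become Network1=/Network2= here, and the C:\Drivers root is made up):

```python
import configparser

# Hypothetical driver INI in the shape of the example above: one
# section per hardware model, one relative driver path per line.
DRIVER_INI = """
[HP DL 360G5]
SSD=Compaq\\SSD\\7.91
Network1=Compaq\\Network\\Broadcom\\NC37xx-380x\\3.4.10.0
Network2=Compaq\\Network\\Intel\\NC61x-71x\\8.9.1.0
"""

def pnp_paths_for(model, root="C:\\Drivers"):
    """Walk the model's INI section and build its PnP driver path list."""
    cp = configparser.ConfigParser()
    cp.read_string(DRIVER_INI)
    return [root + "\\" + path for path in cp[model].values()]

# The deployment script would then copy each listed folder to the
# local disk and join the paths for the driver-path setting.
paths = pnp_paths_for("HP DL 360G5")
pnp_path_setting = ";".join(paths)
```

    Joining only the paths listed under the detected model is what keeps the dynamically built driver path under the length limit: drivers for other models never enter the string.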

    Chris

    I believe you can use the command line to set the NICs to FD (full duplex).

    It's not pretty, though. You can put this in your startnet.cmd so it happens right after PE comes up.

    I use a tool that we wrote in house that configs the speed and duplex of each nic.

    Chris

  18. Funny... I tell people that I do Unattended installs and they look at me funny. "Really? Why not Images?"

    Then I tell them that I support...

    49 models of hardware across 4 different manufacturers (HP, IBM, VMware, Dell), times 3 different operating systems (win2k3 32-bit, win2k3 64-bit, win2k)...

    5 different Service Pack versions... (win2k3-sp1 (32 and 64 bit), win2k3-sp2 (32 and 64 bit), win2k-sp4)...

    6 different OS versions - win2k3 Standard (32 and 64 bit), win2k3 Enterprise (32 and 64 bit), win2k Standard and Win2k Advanced.

    ... and follow that up with... Who has time for images?

    Oh did I mention that I am the ONLY person in my company that engineers automated Server OS installs?

    Actually, in my environment, images are a bottleneck, as they have to be copied to many distribution servers around the globe.

    Unattended installs take a bit longer but are much easier to manage and cost a lot less in bandwidth.

    Why copy gigs of image data when I can just update drivers when necessary (megs or less)? The install binaries never change.

    Once the OS is on... It's all post scripting.

    Hotfixes are installed as part of the post scripting piece.

    The big win here is that everything is installed right the first time, every time. There is no rollback or cleanup of an image.

    A clean install every time. I call this "Tier 1" thinking: every host is a fresh build with no margin for error.

    Net net: on a newer server I can get the OS, hotfixes, and baseline applications (driver support packs, antivirus, monitoring software, etc.) onto a server in about 40 minutes.

    The only thing I don't automate is the NIC teaming, due to the multitude of possible configs we use.

    The question was raised about version consistency...

    All applications are packaged (hotfixes too), so I know exactly what is going to be on the box and how it will be configured.

    Chris


