<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Computing | The End of the Tunnel</title><link>https://development--vigilant-hodgkin-644b1e.netlify.com/categories/computing/</link><atom:link href="https://development--vigilant-hodgkin-644b1e.netlify.com/categories/computing/index.xml" rel="self" type="application/rss+xml"/><description>Computing</description><generator>Source Themes Academic (https://sourcethemes.com/academic/)</generator><language>en-us</language><copyright>© 2019 Derek Murawsky</copyright><lastBuildDate>Wed, 29 Aug 2018 12:05:52 -0400</lastBuildDate><image><url>https://development--vigilant-hodgkin-644b1e.netlify.com/img/icon-32.png</url><title>Computing</title><link>https://development--vigilant-hodgkin-644b1e.netlify.com/categories/computing/</link></image><item><title>Rebuilding the Homestead’s DNS with Consul, DNSMasq, and Ansible</title><link>https://development--vigilant-hodgkin-644b1e.netlify.com/post/rebuilding-homestead-dns/</link><pubDate>Wed, 29 Aug 2018 12:05:52 -0400</pubDate><guid>https://development--vigilant-hodgkin-644b1e.netlify.com/post/rebuilding-homestead-dns/</guid><description>
&lt;p&gt;My friend Jason recently posted an update on his blog over at &lt;a href=&#34;https://peaksandprotocols.com/home-network-dns-infrastructure/&#34; target=&#34;_blank&#34;&gt;Peaks and Protocols&lt;/a&gt; about redoing his home network’s DNS setup. This reminded me that I really needed to do an update on my own recent DNS rebuild, which was based around &lt;a href=&#34;https://www.hashicorp.com/&#34; target=&#34;_blank&#34;&gt;HashiCorp&lt;/a&gt;’s &lt;a href=&#34;https://www.consul.io/&#34; target=&#34;_blank&#34;&gt;Consul&lt;/a&gt;, &lt;a href=&#34;http://www.thekelleys.org.uk/dnsmasq/doc.html&#34; target=&#34;_blank&#34;&gt;DNSMasq&lt;/a&gt;, and &lt;a href=&#34;https://www.ansible.com/&#34; target=&#34;_blank&#34;&gt;Ansible&lt;/a&gt; running on some &lt;a href=&#34;https://www.raspberrypi.org/&#34; target=&#34;_blank&#34;&gt;Raspberry Pi 3&lt;/a&gt;s. Overkill? Probably. But if you can’t have fun with your home network, what’s the point? On to the setup…&lt;/p&gt;
&lt;figure&gt;
&lt;a data-fancybox=&#34;&#34; href=&#34;images/homenet.png&#34; &gt;
&lt;img src=&#34;images/homenet.png&#34; alt=&#34;&#34; &gt;&lt;/a&gt;
&lt;/figure&gt;
&lt;h2 id=&#34;consul&#34;&gt;Consul&lt;/h2&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;a data-fancybox=&#34;&#34; href=&#34;images/consul-logo.png&#34; &gt;
&lt;img src=&#34;images/consul-logo.png&#34; alt=&#34;&#34; width=&#34;100&#34; &gt;&lt;/a&gt;
&lt;/figure&gt;
Consul started life as a distributed service locator and key-value store. It has grown significantly over the years and is now becoming a full-fledged service mesh. Any server can register one or more services with simple config files or API calls. Further, Consul natively supports multiple datacenters and has built-in health checks, which means it will always point you at a local, healthy service endpoint.&lt;/p&gt;
&lt;p&gt;One of the main reasons I chose Consul is because it makes itself available via DNS as the &lt;code&gt;.consul&lt;/code&gt; domain. Want to know where your git server is? &lt;code&gt;dig git.service.consul&lt;/code&gt;. Your documentation hosted on a webserver somewhere? &lt;code&gt;dig docs.service.consul&lt;/code&gt;. This makes finding a service you have running somewhere trivial, and means never having to update a DNS zone file again.&lt;/p&gt;
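&lt;p&gt;To make the git example above concrete, here is a sketch of the kind of service definition file the agent reads (the name, port, and health check are illustrative, not my actual config):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-json&#34;&gt;{
  &amp;quot;service&amp;quot;: {
    &amp;quot;name&amp;quot;: &amp;quot;git&amp;quot;,
    &amp;quot;port&amp;quot;: 22,
    &amp;quot;check&amp;quot;: {
      &amp;quot;tcp&amp;quot;: &amp;quot;localhost:22&amp;quot;,
      &amp;quot;interval&amp;quot;: &amp;quot;30s&amp;quot;
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Drop a file like that into the agent’s config directory (conventionally /etc/consul.d/) and reload the agent, and &lt;code&gt;dig git.service.consul&lt;/code&gt; will start answering; if the health check fails, that node drops out of the DNS response.&lt;/p&gt;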
&lt;p&gt;Another reason, which I’m not using yet, is that it has a solid key-value store. This is great for storing configuration settings for distributed applications. There are a ton of tools that take advantage of this, and even provide dynamic reloading capabilities to the app when a key is changed in Consul.&lt;/p&gt;
&lt;h2 id=&#34;dnsmasq&#34;&gt;DNSMasq&lt;/h2&gt;
&lt;p&gt;In order to take advantage of Consul’s DNS features you need a DNS server that can point to Consul for just that domain, while passing through all other traffic to a normal DNS resolver. I chose DNSMasq for this because it is simple and well understood. There were some security issues with it last year, but they have since been addressed. I may migrate to &lt;a href=&#34;https://nlnetlabs.nl/projects/unbound/about/&#34; target=&#34;_blank&#34;&gt;unbound&lt;/a&gt; in the long run, but DNSMasq is fine for my use cases.&lt;/p&gt;
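&lt;p&gt;The heart of that split configuration is a single dnsmasq directive; a minimal sketch (the upstream servers here are Google’s public resolvers, as in my full config below):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-plaintext&#34;&gt;# Hand *.consul queries to the local Consul agent&#39;s DNS interface
server=/consul/127.0.0.1#8600
# Everything else goes upstream
server=8.8.8.8
server=8.8.4.4
&lt;/code&gt;&lt;/pre&gt;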
&lt;h2 id=&#34;ansible-putting-it-all-together&#34;&gt;Ansible &amp;amp; Putting it All Together&lt;/h2&gt;
&lt;figure&gt;
&lt;a data-fancybox=&#34;&#34; href=&#34;images/ansible-logo.png&#34; &gt;
&lt;img src=&#34;images/ansible-logo.png&#34; alt=&#34;&#34; width=&#34;100&#34; &gt;&lt;/a&gt;
&lt;/figure&gt;
&lt;p&gt;Ansible is the glue that makes sure I can redo this config easily should something happen to the Pis. It is a configuration management system that just works, with minimal extra craziness. I could go on for days about Ansible, and probably should write a dozen posts on it alone, but there’s so much out there already that I don’t feel the need. Bottom line is, this is the tool that sets up Consul and DNSMasq for me, and ensures that I can reset everything to a known working state in the event of configuration drift.&lt;/p&gt;
&lt;p&gt;I used several modules to help get this project running quickly.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/idealista/consul-role&#34; target=&#34;_blank&#34;&gt;idealista-consul-role&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/Oefenweb/ansible-dnsmasq&#34; target=&#34;_blank&#34;&gt;oefenweb.dnsmasq&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/geerlingguy/ansible-role-ntp&#34; target=&#34;_blank&#34;&gt;geerlingguy.ntp&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I ended up having to change some of the roles around to suit the Raspberry Pi environment, but otherwise it was fairly easy. I created my own baseline role which updates and upgrades packages and installs a few extras, including Python and its tooling. This base role also creates user accounts for me and for Ansible itself. The first time I ran it, I had to pass parameters to log in as the default Raspbian user, but after that it can run as the Ansible user instead.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;- name: Update Apt and Upgrade Packages
  apt:
    update_cache: yes
    cache_valid_time: 3600
    name: &amp;quot;*&amp;quot;
    state: latest
  tags:
    - packages

- name: Install Baseline Apps
  apt:
    name: &amp;quot;{{ packages }}&amp;quot;
    state: present
  vars:
    packages:
      - python
      - python-pip
      - python3
      - python3-pip
      - virtualenv
      - python3-virtualenv
      - dnsutils
  tags:
    - packages

- name: Install pi base python packages
  pip:
    name: &amp;quot;{{ packages }}&amp;quot;
    state: present
  vars:
    packages:
      - python-consul
      - hvac

- name: Create Ansible management user
  user:
    name: ansible
    comment: Ansible system user
    group: admin
    state: present

- name: Create dmurawsky user
  user:
    name: dmurawsky
    comment: Derek Murawsky
    group: admin
    state: present
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For my group_vars, I created a dns.yml file (named to match the dns host group) with the needed variables for Consul, DNSMasq, and NTP.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;# Consul Configuration
consul_version: 1.2.2
#consul_package: consul_1.2.2_linux_arm.zip
consul_server: true
consul_agent: true
consul_ui: true
consul_server_nodes:
  - 192.168.1.2
  - 192.168.1.3

# Services #
consul_agent_services: true
consul_services_register:
  # Register NTP in consul
  - name: ntp
    port: 123
    tags:
      - udp
  - name: dns
    port: 53
    tags:
      - udp

# Hashicorp Vault
vault_version: 0.10.4
vault_pkg: vault_{{ vault_version }}_linux_arm.zip
vault_pkg_sum: 384e47720cdc72317d3b8c98d58e6c8c719ff3aaeeb71b147a6f5f7a529ca21b

# DNSMasq
dnsmasq_dnsmasq_conf:
  - |
    port=53
    bind-interfaces
    server=8.8.8.8
    server=8.8.4.4
dnsmasq_dnsmasq_d_files_present:
  cache:
    - |
      domain-needed
      bogus-priv
      no-hosts
      dns-forward-max=150
      cache-size=1000
      neg-ttl=3600
      no-poll
      no-resolv
  consul:
    - |
      server=/consul/127.0.0.1#8600
  homestead-murawsky-net:
    - address=/usg.homestead.murawsky.net/192.168.1.1
    - address=/ns1.homestead.murawsky.net/192.168.1.2
    - address=/ns2.homestead.murawsky.net/192.168.1.3

# NTP
ntp_enabled: true
ntp_manage_config: true
ntp_area: &#39;us&#39;
ntp_servers:
  - &amp;quot;0.{{ ntp_area }}.pool.ntp.org iburst&amp;quot;
  - &amp;quot;1.{{ ntp_area }}.pool.ntp.org iburst&amp;quot;
  - &amp;quot;2.{{ ntp_area }}.pool.ntp.org iburst&amp;quot;
  - &amp;quot;3.{{ ntp_area }}.pool.ntp.org iburst&amp;quot;
ntp_timezone: America/New_York
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And finally, the simple site.yml file.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-yaml&#34;&gt;- name: Configure System Baselines
  hosts: all
  roles:
    - { role: baseline, tags: [&#39;baseline&#39;] }

- name: Configure DNS hosts
  hosts: dns
  roles:
    - { role: ntp, tags: [&#39;ntp&#39;] }
    - { role: dnsmasq, tags: [&#39;dnsmasq&#39;] }
    - { role: consul, tags: [&#39;consul&#39;] }
    - { role: hashivault, tags: [&#39;hashivault&#39;] }
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id=&#34;results&#34;&gt;Results&lt;/h2&gt;
&lt;p&gt;DNS resolution worked perfectly out of the gate as expected, but what about Consul?&lt;/p&gt;
&lt;p&gt;
&lt;figure&gt;
&lt;a data-fancybox=&#34;&#34; href=&#34;images/consul-screenshot.png&#34; &gt;
&lt;img src=&#34;images/consul-screenshot.png&#34; alt=&#34;&#34; width=&#34;600&#34; &gt;&lt;/a&gt;
&lt;/figure&gt;
Brilliant! Sure, the services that I have loaded are pretty simple and don’t really benefit from a service locator, but they’re examples of what is possible. Now I can register any new service by installing the Consul agent on the server and simply adding a definition file in the appropriate folder! This should make future expansion of services much easier.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; I currently have two Consul servers. That isn’t highly available: with only two servers, losing either one costs the cluster its Raft quorum. I need to get one more Consul server online, and I’m debating between another Pi and putting it on the home server.&lt;/p&gt;
&lt;h2 id=&#34;future-plans&#34;&gt;Future Plans&lt;/h2&gt;
&lt;p&gt;You’ll notice there’s no real security around the deployment above either. That needs to be fixed in terms of Consul ACLs, Vault, and password/key management for user accounts. There’s also a cool tool called &lt;a href=&#34;https://pi-hole.net/&#34; target=&#34;_blank&#34;&gt;pi-hole&lt;/a&gt;, a DNS-level ad blocker that I want to integrate into my environment. I also plan on setting up Docker on my home server in the not-too-distant future to make it easier to host some fun services like &lt;a href=&#34;https://prometheus.io/&#34; target=&#34;_blank&#34;&gt;Prometheus&lt;/a&gt;, &lt;a href=&#34;https://grafana.com/&#34; target=&#34;_blank&#34;&gt;Grafana&lt;/a&gt;, &lt;a href=&#34;https://www.home-assistant.io/&#34; target=&#34;_blank&#34;&gt;HomeAssistant&lt;/a&gt;, and some other cool tools. I’ll also have to extend the network to my barn as the office is moving out there. Lastly, I want to build a portable lab that I can take with me when doing demos or presentations at local user groups.&lt;/p&gt;</description></item><item><title>Homestead Network Upgrades</title><link>https://development--vigilant-hodgkin-644b1e.netlify.com/post/homestead-network-upgrades/</link><pubDate>Sun, 22 Oct 2017 12:05:33 -0400</pubDate><guid>https://development--vigilant-hodgkin-644b1e.netlify.com/post/homestead-network-upgrades/</guid><description>&lt;p&gt;Despite coming from the networking side of IT, I tend to use regular consumer-grade equipment at home. It typically just works, and I’m not looking for extreme reliability or features. I’ve been using hardware from Linksys, Netgear, and the other consumer network vendors for at least the last 10 years. Sometimes, though, things happen that make you reevaluate your previous life choices…&lt;/p&gt;
&lt;p&gt;For me, that thing was an email that I received from Verizon saying my router was infected with malware. Since I always take basic precautions like changing the default password and locking down external ports, I was a bit surprised. Turns out, there was a vulnerability in the firmware that had gone unpatched for months… In hindsight, I should not have been that surprised. At all. I thought I had purchased a flagship router that would be supported for at least a few years, but it didn’t look like any more patches were coming. Ever. I looked into trusty old &lt;a href=&#34;http://www.dd-wrt.com/&#34; target=&#34;_blank&#34;&gt;DD-WRT&lt;/a&gt; figuring that I could flash the router and at least get another year out of it, but apparently the R7000 has some performance issues with DD-WRT.&lt;/p&gt;
&lt;p&gt;After having issues like this a few times with generic consumer grade stuff over the years, no matter the vendor, I decided enough was enough. I researched available options in the enterprise hardware space (way too expensive and time consuming to set up), looked at open source alternatives (cheap, but time consuming, and not well integrated), and even looked at the more pro-level offerings from consumer manufacturers (underwhelming). After a few days, I decided on and purchased some &lt;a href=&#34;https://www.ubnt.com/&#34; target=&#34;_blank&#34;&gt;Ubiquiti&lt;/a&gt; hardware based on the many good reviews and a few personal recommendations from networking folks I respect.&lt;/p&gt;
&lt;figure&gt;
&lt;a data-fancybox=&#34;&#34; href=&#34;images/ubiquiti-logo.png&#34; &gt;
&lt;img src=&#34;images/ubiquiti-logo.png&#34; alt=&#34;&#34; width=&#34;150&#34; &gt;&lt;/a&gt;
&lt;/figure&gt;
&lt;p&gt;Ubiquiti’s hardware is solid stuff, performance-wise, and they have a very good reputation. The hardware is what I would call “Enterprise Lite”, meaning it’s not Cisco, but it’s perfect for small-to-medium businesses that just want things to work. Additionally, the &lt;a href=&#34;https://unifi-sdn.ubnt.com/&#34; target=&#34;_blank&#34;&gt;Unifi configuration system&lt;/a&gt; and dashboard are excellent, taking a significant configuration and support burden off of me.&lt;/p&gt;
&lt;p&gt;The initial hardware purchase was:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://www.ubnt.com/unifi-routing/unifi-security-gateway-pro-4/&#34; target=&#34;_blank&#34;&gt;Unifi Security Gateway Pro&lt;/a&gt; (&lt;a href=&#34;https://www.amazon.com/gp/product/B019PBEI5W&#34; target=&#34;_blank&#34;&gt;Amazon&lt;/a&gt;) - I definitely went overkill here. The &lt;a href=&#34;https://www.amazon.com/Ubiquiti-Unifi-Security-Gateway-USG/dp/B00LV8YZLK/&#34; target=&#34;_blank&#34;&gt;entry-model USG&lt;/a&gt; is capable of routing gigabit at near wire speed. However, I decided that I liked the extra ports for a few future projects, like the barn office.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.ubnt.com/unifi-switching/unifi-switch-8/&#34; target=&#34;_blank&#34;&gt;Unifi Switch 8, 60 Watt&lt;/a&gt; (&lt;a href=&#34;https://www.amazon.com/gp/product/B01MU3WUX1&#34; target=&#34;_blank&#34;&gt;Amazon&lt;/a&gt;) - Since the new network was not an all-in-one setup, I needed something to power the other devices around the house. This managed PoE switch provided a lot more than just that, though. The VLANs will come in handy when we set up the home office.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.ubnt.com/unifi/unifi-ap-ac-pro/&#34; target=&#34;_blank&#34;&gt;Unifi AP AC Pro&lt;/a&gt; (&lt;a href=&#34;https://www.amazon.com/gp/product/B015PRO512&#34; target=&#34;_blank&#34;&gt;Amazon&lt;/a&gt;) - Another bit of overkill for home use, but this one was easier to justify than the firewall. Simply put, it has more power, and I need that given the 2′-thick stone walls in the farmhouse.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.ubnt.com/unifi/unifi-cloud-key/&#34; target=&#34;_blank&#34;&gt;Unifi Cloud Key&lt;/a&gt; (&lt;a href=&#34;https://www.amazon.com/gp/product/B017T2QB22&#34; target=&#34;_blank&#34;&gt;Amazon&lt;/a&gt;) - Though not strictly necessary, the Cloud Key allows you to run the network controller app on dedicated hardware. It can also be linked to the Unifi cloud portal, allowing for a very convenient and secure hybrid cloud management platform.&lt;/li&gt;
&lt;/ul&gt;
&lt;figure&gt;
&lt;a data-fancybox=&#34;&#34; href=&#34;images/murawsky-homestead-physical.png&#34; &gt;
&lt;img src=&#34;images/murawsky-homestead-physical.png&#34; alt=&#34;&#34; width=&#34;300&#34; &gt;&lt;/a&gt;
&lt;/figure&gt;
&lt;p&gt;The hardware wasn’t cheap, but surprisingly, it wasn’t much more than I paid for the R7000 two years ago. If I had chosen the regular USG, the price difference would have been negligible.&lt;/p&gt;
&lt;p&gt;As for the setup, it was easier than I thought. I racked the USG Pro, plugged in the switch, then the cloud key. Thankfully I had already run the line to the wireless AP so that was easy. I also threw in a Raspberry Pi server for fun. It took about 10 minutes to patch everything together. But what about the configuration?&lt;/p&gt;
&lt;p&gt;Well, thanks to the Unifi software on the Cloud Key, I was able to “adopt” the other devices and have them configured in no time at all. My basic single-VLAN setup was ready to go out of the box. All told, I had the network up and running in 20 minutes. Time versus the R7000? Maybe an extra 10 minutes.&lt;/p&gt;
&lt;figure&gt;
&lt;a data-fancybox=&#34;&#34; href=&#34;images/unifi-dash.png&#34; &gt;
&lt;img src=&#34;images/unifi-dash.png&#34; alt=&#34;&#34; width=&#34;600&#34; &gt;&lt;/a&gt;
&lt;/figure&gt;
&lt;p&gt;What has it been like living with “Enterprise Lite” hardware at home? Fantastic. Having a useful dashboard that I can glance at to see the status of the home network is a perk I didn’t think I would care about, but I’ve used it several times already. The speed is true gigabit on wired connections, the wireless coverage is solid, and we don’t have random drops in connectivity anymore. And as for patches… I’ve already had two patches come through for the stack. It’s a simple matter of hitting the upgrade button for the device, or setting up auto-upgrade. As far as I’m concerned, I’m never going back to consumer gear again.&lt;/p&gt;</description></item><item><title>To Export the Unexportable Key</title><link>https://development--vigilant-hodgkin-644b1e.netlify.com/post/export-unexportable-key/</link><pubDate>Wed, 01 Mar 2017 11:59:53 -0400</pubDate><guid>https://development--vigilant-hodgkin-644b1e.netlify.com/post/export-unexportable-key/</guid><description>
&lt;p&gt;Every now and then, you have to export a certificate in Windows, and someone forgot to check the little box that marks the private key as exportable… What is an enterprising SysAdmin to do? Enter &lt;a href=&#34;http://blog.gentilkiwi.com/mimikatz&#34; target=&#34;_blank&#34;&gt;Mimikatz&lt;/a&gt; (&lt;a href=&#34;https://github.com/gentilkiwi/mimikatz&#34; target=&#34;_blank&#34;&gt;source&lt;/a&gt;), a tool that lets you patch the Windows crypto API and do several cool (and frightening) things. The process is very simple.&lt;/p&gt;
&lt;h2 id=&#34;to-export-an-unexportable-private-key&#34;&gt;To Export an Unexportable Private Key:&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Create a temp directory&lt;/li&gt;
&lt;li&gt;Download the latest version of &lt;a href=&#34;https://github.com/gentilkiwi/mimikatz/releases/tag/2.1.0-20170227&#34; target=&#34;_blank&#34;&gt;Mimikatz&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Extract the appropriate version (32 or 64 bit) to the temp directory&lt;/li&gt;
&lt;li&gt;Open an admin command prompt&lt;/li&gt;
&lt;li&gt;Change to the temp directory&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;mimikatz&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Type &lt;code&gt;crypto::capi&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;And finally type &lt;code&gt;crypto::certificates /export&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;You’ll see all of the certificates in the MY store exported into the temp directory in PFX format. The default password is &lt;code&gt;mimikatz&lt;/code&gt;. Want another cert store? Perhaps the computer store? Simply run &lt;code&gt;crypto::certificates /export /systemstore:LOCAL_MACHINE&lt;/code&gt;. Check out the &lt;a href=&#34;https://github.com/gentilkiwi/mimikatz/wiki&#34; target=&#34;_blank&#34;&gt;github wiki&lt;/a&gt; for documentation on this and other cool features of this powerful tool.&lt;/p&gt;
&lt;p&gt;Now that we are implementing more Linux systems, I’m noticing some of the pain points of keeping certain things in sync. A big annoyance, for example, is keeping our infrastructure and users’ SSH keys in sync across all of our machines. There are several methods currently available, but I had issues with each. I’ve listed the two main methods below.&lt;/p&gt;
&lt;h2 id=&#34;via-configuration-management&#34;&gt;Via Configuration Management&lt;/h2&gt;
&lt;p&gt;A very DevOpsy way of tackling the problem would be to use a configuration management system like Chef to keep the files updated. In fact, there are &lt;a href=&#34;https://www.chef.io/blog/2014/07/10/managing-users-and-ssh-keys-in-a-hybrid-world/&#34; target=&#34;_blank&#34;&gt;several examples&lt;/a&gt; of &lt;a href=&#34;https://forge.puppetlabs.com/tags/authorized-keys&#34; target=&#34;_blank&#34;&gt;this solution&lt;/a&gt; out there already. However, this seems a bit counter-intuitive to me. Why keep user account and related information in a config management system instead of a directory service? This is probably my Windows World bias, but &lt;a href=&#34;https://jumpcloud.com/blog/why-user-management-in-chef-and-puppet-is-a-mistake/&#34; target=&#34;_blank&#34;&gt;there are others that agree&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;via-scripts-dedicated-systems&#34;&gt;Via Scripts/Dedicated Systems&lt;/h2&gt;
&lt;p&gt;From simple &lt;a href=&#34;https://github.com/bronson/sshkeys&#34; target=&#34;_blank&#34;&gt;shell scripts&lt;/a&gt;, to &lt;a href=&#34;https://github.com/cloudtools/ssh-cert-authority&#34; target=&#34;_blank&#34;&gt;complex&lt;/a&gt; &lt;a href=&#34;http://sshkeybox.com/&#34; target=&#34;_blank&#34;&gt;systems&lt;/a&gt;, there are many ways to keep this data in sync. The simplest would appear to be setting up NFS and pointing all users’ home directories there… But then you have to keep those NFS servers in sync and backed up across multiple sites, which can be problematic at scale.&lt;/p&gt;
&lt;h2 id=&#34;our-solution-ad-ldap-storage-of-ssh-keys&#34;&gt;Our Solution – AD/LDAP storage of SSH keys&lt;/h2&gt;
&lt;p&gt;To be up front, this was not my idea. There are many other folks who have &lt;a href=&#34;https://github.com/AndriiGrytsenko/openssh-ldap-publickey&#34; target=&#34;_blank&#34;&gt;implemented&lt;/a&gt; &lt;a href=&#34;https://code.google.com/p/openssh-lpk/&#34; target=&#34;_blank&#34;&gt;similar&lt;/a&gt; &lt;a href=&#34;http://itdavid.blogspot.com/2013/11/howto-configure-openssh-to-fetch-public.html&#34; target=&#34;_blank&#34;&gt;solutions&lt;/a&gt;. We are using this method specifically because we already have a robust AD infrastructure with all of our Linux authentication going through AD already (a post on this is soon to come). It probably doesn’t make sense for a group that already has a solid solution in, say, chef or puppet. For us, it did, and this is how we built it.&lt;/p&gt;
&lt;p&gt;First, we had to extend the Active Directory schema. This is not something for the faint of heart, but is also not something to be afraid of. I followed the procedure listed &lt;a href=&#34;https://www.balabit.com/sites/default/files/documents/scb-latest-guides/en/scb-guide-admin/html/proc-scenario-usermapping.html&#34; target=&#34;_blank&#34;&gt;here&lt;/a&gt; (after backing things up) and had everything ready to go in about 15 minutes. A note on the procedure: you do not need to use ADSIEdit to manage the custom attribute afterwards. Just open AD Users and Computers and switch to the advanced view mode. Each item will then have an “attributes” tab in its properties page.&lt;/p&gt;
&lt;p&gt;Once the schema was extended, the fun began. OpenSSH supports a config variable called “AuthorizedKeysCommand”. This allows us to call an arbitrary script to pull the user’s authorized_keys file. This &lt;a href=&#34;http://serverfault.com/questions/653792/ssh-key-authentication-using-ldap&#34; target=&#34;_blank&#34;&gt;serverfault post&lt;/a&gt; got me going on creating a custom command, but the output of sed wasn’t clean enough. I whipped up the following script in Perl to get everything working nicely. It binds to AD using a username and password and then pulls all sshPublicKey values from the specified user account.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-perl&#34;&gt;#!/usr/bin/perl
# Gets authorized keys from LDAP. Cleaner and supports any number of ssh keys, within reason.
# Requires Net::LDAP.
use Net::LDAP;

$BINDDN     = &amp;quot;cn=service account,dc=example,dc=com&amp;quot;;
$BINDPW     = &amp;quot;Password&amp;quot;;
$SEARCHBASE = &amp;quot;dc=example,dc=com&amp;quot;;
$SERVER     = &amp;quot;domain or ip&amp;quot;;
$SearchFor  = &amp;quot;samaccountname=$ARGV[0]&amp;quot;;

$ldap = Net::LDAP-&amp;gt;new( $SERVER ) or die &amp;quot;$@&amp;quot;;
$msg = $ldap-&amp;gt;bind( $BINDDN, password =&amp;gt; $BINDPW );
$result = $ldap-&amp;gt;search(
    base   =&amp;gt; $SEARCHBASE,
    filter =&amp;gt; $SearchFor,
);

while (my $entry = $result-&amp;gt;shift_entry) {
    foreach ($entry-&amp;gt;get_value(&#39;sshPublicKey&#39;)) {
        print $_, &amp;quot;\n&amp;quot;;
    }
}
$ldap-&amp;gt;unbind;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once the script is created, it can be called by adding &lt;code&gt;AuthorizedKeysCommand /path/to/script&lt;/code&gt; to the sshd_config file. I also had to set the script to run as root with the &lt;code&gt;AuthorizedKeysCommandUser root&lt;/code&gt; directive.&lt;/p&gt;
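&lt;p&gt;For reference, the relevant sshd_config lines end up looking like this (the script path is a placeholder). Note that sshd invokes the command with the username as its first argument, which is what the Perl script reads as $ARGV[0].&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-plaintext&#34;&gt;# /etc/ssh/sshd_config (excerpt)
AuthorizedKeysCommand /usr/local/bin/ldap-authorized-keys
AuthorizedKeysCommandUser root
&lt;/code&gt;&lt;/pre&gt;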
&lt;h2 id=&#34;next-steps&#34;&gt;Next Steps&lt;/h2&gt;
&lt;p&gt;I want to improve this script in a few ways long-term…&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Since all of our Linux systems are part of our domain, there should be a way to have them bind to LDAP by using the machine’s Kerberos ticket. I don’t like using a username and password, but didn’t have the time to get the Kerberos bind working reliably.&lt;/li&gt;
&lt;li&gt;On the security front, this should be a TLS bind. No reason to have the data going over the wire cleartext.&lt;/li&gt;
&lt;li&gt;The script should not have to run as root…&lt;/li&gt;
&lt;li&gt;Cache the authorized_keys file on a per-user basis. We have a very robust AD infrastructure, but there is always a concern that it could become unavailable. The system’s resiliency would be greatly increased if it could cache the authorized_keys locally on a per-user basis, where sshd would normally look for it.&lt;/li&gt;
&lt;li&gt;Error Handling and Logging. It’s not fun, but it’s important. I wanted to get this solution out quickly, but it should be able to log to standard sources and handle some edge cases.&lt;/li&gt;
&lt;li&gt;Since the above is a lot of work, perhaps I can just improve a project like &lt;a href=&#34;https://github.com/jirutka/ssh-ldap-pubkey&#34; target=&#34;_blank&#34;&gt;ssh-ldap-pubkey&lt;/a&gt; to support Kerberos.&lt;/li&gt;
&lt;/ul&gt;
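&lt;p&gt;The caching idea is the one I’d tackle first, and it’s easy to prototype. Below is a rough sketch of the approach, not production code; every path and name in it is made up for illustration:&lt;/p&gt;

```shell
#!/bin/sh
# Hypothetical caching wrapper around the LDAP lookup (paths and names are
# illustrative, not the production script). fetch_keys stands in for the
# real Net::LDAP script; lookup is what sshd would actually call.

CACHE_DIR=$(mktemp -d)   # production would use a fixed path such as /var/cache/ssh-keys

lookup() {               # $1 = username
    if keys=$(fetch_keys "$1" 2>/dev/null) && [ -n "$keys" ]; then
        printf '%s\n' "$keys" > "$CACHE_DIR/$1"   # refresh the cache on success
        printf '%s\n' "$keys"
    else
        cat "$CACHE_DIR/$1" 2>/dev/null           # AD unreachable: serve the cached copy
    fi
}

# Simulate a healthy directory, then an outage
fetch_keys() { echo "ssh-rsa AAAATESTKEY dmurawsky@homestead"; }
lookup dmurawsky
fetch_keys() { return 1; }   # directory down
lookup dmurawsky             # still prints the cached key
```

&lt;p&gt;The real version would need locking, cache expiry, and correct file ownership, but the shape is the same: prefer the directory, fall back to the last known-good keys.&lt;/p&gt;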
&lt;h2 id=&#34;external-links&#34;&gt;External Links&lt;/h2&gt;
&lt;p&gt;I found the following links quite helpful in generating this solution.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;http://serverfault.com/questions/653792/ssh-key-authentication-using-ldap&#34; target=&#34;_blank&#34;&gt;http://serverfault.com/questions/653792/ssh-key-authentication-using-ldap&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.balabit.com/sites/default/files/documents/scb-latest-guides/en/scb-guide-admin/html/proc-scenario-usermapping.html&#34; target=&#34;_blank&#34;&gt;https://www.balabit.com/sites/default/files/documents/scb-latest-guides/en/scb-guide-admin/html/proc-scenario-usermapping.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/jirutka/ssh-ldap-pubkey&#34; target=&#34;_blank&#34;&gt;https://github.com/jirutka/ssh-ldap-pubkey&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>Flexible Email Alerts for Logstash</title><link>https://development--vigilant-hodgkin-644b1e.netlify.com/post/flexible-email-alerts-for-logstash/</link><pubDate>Fri, 13 Nov 2015 17:00:47 -0400</pubDate><guid>https://development--vigilant-hodgkin-644b1e.netlify.com/post/flexible-email-alerts-for-logstash/</guid><description>
&lt;figure&gt;
&lt;a data-fancybox=&#34;&#34; href=&#34;images/logstash-logo.png&#34; &gt;
&lt;img src=&#34;images/logstash-logo.png&#34; alt=&#34;&#34; width=&#34;100&#34; &gt;&lt;/a&gt;
&lt;/figure&gt;
&lt;p&gt;My company currently does a lot of its debug logging via email&amp;hellip; This means that every time an unhandled exception occurs in production, QA, UAT, or integration, we get an email. Thank goodness for custom email rules and single instance storage in Exchange. &lt;a href=&#34;http://farisnt.blogspot.com/2014/09/exchange-2010-2013-server-single.html&#34; target=&#34;_blank&#34;&gt;Oh wait&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I have been a proponent of &lt;a href=&#34;https://www.elastic.co/products/logstash&#34; target=&#34;_blank&#34;&gt;Logstash&lt;/a&gt; and the &lt;a href=&#34;https://www.elastic.co/webinars/introduction-elk-stack&#34; target=&#34;_blank&#34;&gt;ELK stack&lt;/a&gt; for quite a while now. It is a wonderfully flexible framework for centralizing, enriching, and viewing log data. This past week, I built a proof of concept for management and they loved it. However, many folks wanted to know how we could send out emails from the logging system. I pointed them at the &lt;a href=&#34;https://www.elastic.co/guide/en/logstash/current/plugins-outputs-email.html&#34; target=&#34;_blank&#34;&gt;Logstash email output plugin&lt;/a&gt;, but they weren’t convinced. They wanted to see some flexible routing capabilities that could be leveraged in any config file, for any log type. Thankfully, this was pretty easy to accomplish.&lt;/p&gt;
&lt;p&gt;Below I present a simple tag- and field-based config for email notifications.&lt;/p&gt;
&lt;p&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre style=&#34;color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4&#34;&gt;&lt;code class=&#34;language-plaintext&#34; data-lang=&#34;plaintext&#34;&gt;# This config is designed to flexibly send out email notifications.
# It *requires* certain fields to work:
#   Tag:   &amp;#34;SendEmailAlert&amp;#34;
#   Field: emailAlert_to      - the email address to send to
#   Field: emailAlert_subject - the subject of the email
#   Field: emailAlert_body    - the body (e.g. %{message})
#
output {
  if &amp;#34;SendEmailAlert&amp;#34; in [tags] {
    email {
      address  =&amp;gt; &amp;#34;smtp.XXXXX.org&amp;#34;
      username =&amp;gt; &amp;#34;XXXXX&amp;#34;
      password =&amp;gt; &amp;#34;XXXXX&amp;#34;
      via      =&amp;gt; &amp;#34;smtp&amp;#34;
      from     =&amp;gt; &amp;#34;logstash.alert@XXXXXX.com&amp;#34;
      to       =&amp;gt; &amp;#34;%{emailAlert_to}&amp;#34;
      subject  =&amp;gt; &amp;#34;%{emailAlert_subject}&amp;#34;
      body     =&amp;gt; &amp;#34;%{emailAlert_body}&amp;#34;
    }
  }
}&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
As the comments indicate, all you need to do is tag a message with “SendEmailAlert” and add the appropriate fields, and voilà: flexible email notifications. In order to use it, a simple mutate is all that is needed.&lt;/p&gt;
&lt;p&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre style=&#34;color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4&#34;&gt;&lt;code class=&#34;language-plaintext&#34; data-lang=&#34;plaintext&#34;&gt;mutate {
add_tag =&amp;gt; [&amp;#34;SendEmailAlert&amp;#34;]
add_field =&amp;gt; {
&amp;#34;emailAlert_to&amp;#34; =&amp;gt; &amp;#34;user@XXXXX.com&amp;#34;
&amp;#34;emailAlert_subject&amp;#34; =&amp;gt; &amp;#34;Test Alert&amp;#34;
&amp;#34;emailAlert_body&amp;#34; =&amp;gt; &amp;#34;%{message}&amp;#34;
}
}&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
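&lt;p&gt;The %{field} references above are Logstash’s sprintf format. As a rough illustration of what that substitution does (a Python sketch, not Logstash internals), it amounts to a regex replace over the event’s fields:&lt;/p&gt;

```python
import re

def sprintf(template, event):
    """Replace %{field} references with values from the event, Logstash-style."""
    return re.sub(
        r"%\{(\w+)\}",
        lambda m: str(event.get(m.group(1), m.group(0))),  # unknown fields stay literal
        template,
    )

event = {"message": "disk full on /var", "emailAlert_to": "ops@example.com"}
print(sprintf("%{emailAlert_to} gets: %{message}", event))
# ops@example.com gets: disk full on /var
```

&lt;p&gt;Note that, like Logstash, the sketch leaves an unresolved %{field} reference in place rather than failing.&lt;/p&gt;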
We could easily extend it further, but this has been fine for our POC thus far. We have also implemented similar notifications for HipChat and PagerDuty.&lt;/p&gt;</description></item><item><title>Searching for Superfish using PowerShell</title><link>https://development--vigilant-hodgkin-644b1e.netlify.com/post/searching-for-superfish/</link><pubDate>Thu, 19 Feb 2015 16:34:26 -0400</pubDate><guid>https://development--vigilant-hodgkin-644b1e.netlify.com/post/searching-for-superfish/</guid><description>&lt;p&gt;Lenovo installed a piece of software that could arguably be called malware or spyware. Superfish, as &lt;a href=&#34;http://arstechnica.com/security/2015/02/lenovo-pcs-ship-with-man-in-the-middle-adware-that-breaks-https-connections/&#34; target=&#34;_blank&#34;&gt;this article&lt;/a&gt; indicates, installs a self-signed root certificate that is authoritative for everything. I wanted to be sure that this issue wasn’t present on any of our Lenovo systems, so I turned to PowerShell to help.&lt;/p&gt;
&lt;p&gt;I found a copy of the certificate on Robert David Graham’s GitHub &lt;a href=&#34;https://github.com/robertdavidgraham/pemcrack/blob/master/test.pem&#34; target=&#34;_blank&#34;&gt;here&lt;/a&gt;. I pulled the thumbprint from the cert, which appears to be c864484869d41d2b0d32319c5a62f9315aaf2cbd.&lt;/p&gt;
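&lt;p&gt;For the curious: a certificate’s thumbprint isn’t stored in the certificate at all; it is just the SHA-1 hash of the DER-encoded certificate, i.e. the decoded base64 body of the PEM. A minimal Python sketch, using dummy stand-in bytes rather than the actual Superfish certificate:&lt;/p&gt;

```python
import base64
import hashlib

def thumbprint(pem_body_b64: str) -> str:
    """SHA-1 over the DER bytes: exactly what Windows shows as the thumbprint."""
    der = base64.b64decode(pem_body_b64)
    return hashlib.sha1(der).hexdigest()

# Dummy payload (hypothetical); feed it the real PEM body to get the real value.
demo = base64.b64encode(b"not-a-real-certificate").decode()
print(thumbprint(demo))  # 40 hex characters
```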
&lt;p&gt;Now, some simple PowerShell code will let you run through your local certificate store and see if you have it installed.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre style=&#34;color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4&#34;&gt;&lt;code class=&#34;language-Powershell&#34; data-lang=&#34;Powershell&#34;&gt;Get-ChildItem -Recurse cert&lt;span style=&#34;color:#960050;background-color:#1e0010&#34;&gt;:&lt;/span&gt;\LocalMachine\ |where {$_.Thumbprint &lt;span style=&#34;color:#f92672&#34;&gt;-eq&lt;/span&gt; &lt;span style=&#34;color:#e6db74&#34;&gt;&amp;#34;c864484869d41d2b0d32319c5a62f9315aaf2cbd&amp;#34;&lt;/span&gt;}&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;You could just as easily replace the Get-ChildItem with “Remove-Item -Path cert:\LocalMachine\root\c864484869d41d2b0d32319c5a62f9315aaf2cbd”, but I wanted to make sure the key wasn’t installed somewhere else.&lt;/p&gt;
&lt;p&gt;Now, to take it a step further, I use the AD cmdlets and some more simple PowerShell to search all my systems for it.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre style=&#34;color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4&#34;&gt;&lt;code class=&#34;language-Powershell&#34; data-lang=&#34;Powershell&#34;&gt;Import-Module ActiveDirectory
$Cred = Get-Credential
$Computers = Get-ADComputer -Filter {enabled &lt;span style=&#34;color:#f92672&#34;&gt;-eq&lt;/span&gt; $true} | select Name
&lt;span style=&#34;color:#66d9ef&#34;&gt;foreach&lt;/span&gt; ($Computer &lt;span style=&#34;color:#66d9ef&#34;&gt;in&lt;/span&gt; $Computers) {
&lt;span style=&#34;color:#66d9ef&#34;&gt;try&lt;/span&gt;{
&lt;span style=&#34;color:#66d9ef&#34;&gt;if&lt;/span&gt;(test-connection -Count 1 -ComputerName $Computer.Name){
write-output (invoke-command -ComputerName $Computer.Name -Credential $Cred -ScriptBlock {Get-ChildItem -Recurse cert&lt;span style=&#34;color:#960050;background-color:#1e0010&#34;&gt;:&lt;/span&gt;\LocalMachine\ |where {$_.Thumbprint &lt;span style=&#34;color:#f92672&#34;&gt;-eq&lt;/span&gt; &lt;span style=&#34;color:#e6db74&#34;&gt;&amp;#34;c864484869d41d2b0d32319c5a62f9315aaf2cbd&amp;#34;&lt;/span&gt;}})
}
}&lt;span style=&#34;color:#66d9ef&#34;&gt;catch&lt;/span&gt;{
Write-Error (&lt;span style=&#34;color:#e6db74&#34;&gt;&amp;#34;There was an issue connecting to computer $Computer : &amp;#34;&lt;/span&gt; + $_.Exception)
}
}&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Is it perfect? No. But it gets the job done in relatively short order.&lt;/p&gt;</description></item><item><title>Intro To Chocolatey at NJLOPSA</title><link>https://development--vigilant-hodgkin-644b1e.netlify.com/talk/intro-to-chocolatey-njlopsa/</link><pubDate>Thu, 04 Dec 2014 16:32:01 -0400</pubDate><guid>https://development--vigilant-hodgkin-644b1e.netlify.com/talk/intro-to-chocolatey-njlopsa/</guid><description>&lt;p&gt;I will be giving a presentation on Chocolatey, a Windows package manager, tonight at the &lt;a href=&#34;http://www.meetup.com/LOPSA-NJ/&#34; target=&#34;_blank&#34;&gt;New Jersey League of Professional Systems Administrators&lt;/a&gt; meetup. It is being held at the &lt;a href=&#34;http://maps.google.com/maps?f=q&amp;amp;hl=en&amp;amp;q=2751+Brunswick+Pike%2C+Lawrenceville%2C+NJ%2C+us&#34; target=&#34;_blank&#34;&gt;Lawrence Headquarters Branch of the Mercer County Library&lt;/a&gt;, 2751 Brunswick Pike, Lawrenceville, NJ. Come by and get some cake, meet some folks, and learn about a great tool!
For more details and to register, head over to the meetup: &lt;a href=&#34;http://www.meetup.com/LOPSA-NJ/events/218257852/&#34; target=&#34;_blank&#34;&gt;http://www.meetup.com/LOPSA-NJ/events/218257852/&lt;/a&gt;&lt;/p&gt;</description></item><item><title>A Hundred Domains and SHA-1 Deprecation</title><link>https://development--vigilant-hodgkin-644b1e.netlify.com/post/hundred-domains-sha1-deprication/</link><pubDate>Wed, 17 Sep 2014 16:27:17 -0400</pubDate><guid>https://development--vigilant-hodgkin-644b1e.netlify.com/post/hundred-domains-sha1-deprication/</guid><description>&lt;p&gt;Apparently I’ve been living under a rock for a while, because I didn’t know that SHA-1 was being phased out in the immediate future. Thank you, GoDaddy, for notifying me with a month and change to spare. As it turns out, Google will no longer trust certain SHA-1-signed SSL certificates with the release of Chrome 39, which is set for November. For details, see the following links.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;http://blog.chromium.org/2014/09/gradually-sunsetting-sha-1.html&#34; target=&#34;_blank&#34;&gt;Gradually Sunsetting SHA-1&lt;/a&gt; (Google)&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://blogs.technet.com/b/pki/archive/2013/11/12/sha1-deprecation-policy.aspx&#34; target=&#34;_blank&#34;&gt;SHA1 Deprecation Policy&lt;/a&gt; (Microsoft)&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://blog.mozilla.org/security/2014/09/08/phasing-out-certificates-with-1024-bit-rsa-keys/&#34; target=&#34;_blank&#34;&gt;Phasing out Certificates with 1024-bit RSA Keys&lt;/a&gt; (Mozilla)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Because our clients often purchase their own SSL certificates, we have no internal records of which algorithm was used to sign the certificates in use. So now we get to audit slightly over 100 domains to see which signature algorithm each one uses. We could browse to each domain manually and inspect its certificate, but that would just take way too long. There were some web-based tools around that could do it, but they also only worked on one site at a time.&lt;/p&gt;
&lt;p&gt;So, instead, I looked to PowerShell to see what could be done… Unfortunately, there was no native cmdlet to do anything like this! I did find a module that had a lot of great PKI-related functionality, the &lt;a href=&#34;https://pspki.codeplex.com/wikipage?title=Test-WebServerSSL&#34; target=&#34;_blank&#34;&gt;Public Key Infrastructure PowerShell&lt;/a&gt; module, but it, too, didn’t have the much-needed signature algorithm. However, it did provide a very robust base on which to build. Below is the solution I came up with.&lt;/p&gt;
&lt;p&gt;&lt;div class=&#34;highlight&#34;&gt;&lt;pre style=&#34;color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4&#34;&gt;&lt;code class=&#34;language-Powershell&#34; data-lang=&#34;Powershell&#34;&gt;&lt;span style=&#34;color:#66d9ef&#34;&gt;function&lt;/span&gt; get-SSLSigningAlgorithm {
[&lt;span style=&#34;color:#66d9ef&#34;&gt;CmdletBinding&lt;/span&gt;()]
&lt;span style=&#34;color:#66d9ef&#34;&gt;param&lt;/span&gt;(
[&lt;span style=&#34;color:#66d9ef&#34;&gt;Parameter&lt;/span&gt;(&lt;span style=&#34;color:#66d9ef&#34;&gt;Mandatory&lt;/span&gt; = $true, &lt;span style=&#34;color:#66d9ef&#34;&gt;ValueFromPipeline&lt;/span&gt; = $true, &lt;span style=&#34;color:#66d9ef&#34;&gt;Position&lt;/span&gt; = 0)]
&lt;span style=&#34;color:#66d9ef&#34;&gt;[string]&lt;/span&gt;$URL,
[&lt;span style=&#34;color:#66d9ef&#34;&gt;Parameter&lt;/span&gt;(&lt;span style=&#34;color:#66d9ef&#34;&gt;Position&lt;/span&gt; = 1)]
[&lt;span style=&#34;color:#66d9ef&#34;&gt;ValidateRange&lt;/span&gt;(1,65535)]
&lt;span style=&#34;color:#66d9ef&#34;&gt;[int]&lt;/span&gt;$Port = 443,
[&lt;span style=&#34;color:#66d9ef&#34;&gt;Parameter&lt;/span&gt;(&lt;span style=&#34;color:#66d9ef&#34;&gt;Position&lt;/span&gt; = 2)]
&lt;span style=&#34;color:#66d9ef&#34;&gt;[Net.WebProxy]&lt;/span&gt;$Proxy,
[&lt;span style=&#34;color:#66d9ef&#34;&gt;Parameter&lt;/span&gt;(&lt;span style=&#34;color:#66d9ef&#34;&gt;Position&lt;/span&gt; = 3)]
&lt;span style=&#34;color:#66d9ef&#34;&gt;[int]&lt;/span&gt;$Timeout = 15000,
&lt;span style=&#34;color:#66d9ef&#34;&gt;[switch]&lt;/span&gt;$UseUserContext
)
$ConnectString = &lt;span style=&#34;color:#e6db74&#34;&gt;&amp;#34;https://$url`:$port&amp;#34;&lt;/span&gt;
$WebRequest = &lt;span style=&#34;color:#66d9ef&#34;&gt;[Net.WebRequest]&lt;/span&gt;::Create($ConnectString)
$WebRequest.Proxy = $Proxy
$WebRequest.Credentials = $null
$WebRequest.Timeout = $Timeout
$WebRequest.AllowAutoRedirect = $true
&lt;span style=&#34;color:#66d9ef&#34;&gt;[Net.ServicePointManager]&lt;/span&gt;::ServerCertificateValidationCallback = {$true}
&lt;span style=&#34;color:#66d9ef&#34;&gt;try&lt;/span&gt; {$Response = $WebRequest.GetResponse()}
&lt;span style=&#34;color:#66d9ef&#34;&gt;catch&lt;/span&gt; {}
&lt;span style=&#34;color:#66d9ef&#34;&gt;if&lt;/span&gt; ($WebRequest.ServicePoint.Certificate &lt;span style=&#34;color:#f92672&#34;&gt;-ne&lt;/span&gt; $null) {
$Cert = &lt;span style=&#34;color:#66d9ef&#34;&gt;[Security.Cryptography.X509Certificates.X509Certificate2]&lt;/span&gt;$WebRequest.ServicePoint.Certificate.Handle
write-host $Cert.SignatureAlgorithm.FriendlyName;
} &lt;span style=&#34;color:#66d9ef&#34;&gt;else&lt;/span&gt; {
Write-Error $Error[0]
}
}&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
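&lt;p&gt;The batch run around a checker like that is just: read a CSV of domains, check each one, write a results CSV. A rough Python sketch of that harness (the check function here is a hypothetical stand-in, not the PowerShell function above):&lt;/p&gt;

```python
import csv
import io

def audit(domains_csv: str, check) -> str:
    """Read domain names (one per row), call check(domain), return a result CSV."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["domain", "signature_algorithm"])
    for row in csv.reader(io.StringIO(domains_csv)):
        if not row:
            continue  # skip blank lines in the input file
        domain = row[0].strip()
        writer.writerow([domain, check(domain)])
    return out.getvalue()

# Stub checker; a real one would inspect the certificate served by each site.
result = audit("example.com\nexample.org\n", lambda d: "sha256RSA")
print(result)
```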
I’ll create a CSV of the domains that I need to check, and iterate over them in a for-each loop. That function will be used within the loop to check the sites, and the output will go into another CSV. We’ll use that to plan our re-keying.&lt;/p&gt;</description></item><item><title>Hide Disabled AD Accounts from the GAL using Powershell</title><link>https://development--vigilant-hodgkin-644b1e.netlify.com/post/hide-disabled-accounts-from-gal/</link><pubDate>Mon, 08 Sep 2014 16:23:48 -0400</pubDate><guid>https://development--vigilant-hodgkin-644b1e.netlify.com/post/hide-disabled-accounts-from-gal/</guid><description>&lt;p&gt;Our account decommission process involves disabling a user and moving them to a “Disabled Domain Accounts” OU. Well, it turns out that our previous admin never actually hid these mailboxes from the Global Address List (GAL), so many of our offshore partners have still been sending emails to them. I decided to start cleaning this up a bit today with the following:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre style=&#34;color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4&#34;&gt;&lt;code class=&#34;language-Powershell&#34; data-lang=&#34;Powershell&#34;&gt;Search-ADAccount -SearchBase &lt;span style=&#34;color:#e6db74&#34;&gt;&amp;#34;ou=Disabled Domain Accounts,dc=example,dc=local&amp;#34;&lt;/span&gt; -AccountDisabled -UsersOnly |Set-ADUser &lt;span style=&#34;color:#f92672&#34;&gt;-Replace&lt;/span&gt; @{msExchHideFromAddressLists=$true}&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Another simple bit of PowerShell. The first command searches within the disabled account OU, and looks for disabled user accounts only. That output is piped into the second command which replaces the Exchange attribute that hides that account from the GAL.&lt;/p&gt;</description></item><item><title>How to clear all Workstation DNS caches from PowerShell</title><link>https://development--vigilant-hodgkin-644b1e.netlify.com/post/clear-dns-caches-powershell/</link><pubDate>Thu, 04 Sep 2014 16:20:07 -0400</pubDate><guid>https://development--vigilant-hodgkin-644b1e.netlify.com/post/clear-dns-caches-powershell/</guid><description>&lt;p&gt;I recently found myself in need of the ability to clear the DNS cache of all the laptops in my company. I found a very powerful and simple way to do so and thought I would share.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-powershell&#34;&gt;$c = Get-ADComputer -Filter {operatingsystem -notlike &amp;quot;*server*&amp;quot; }
Invoke-Command -cn $c.name -SCRIPT { ipconfig /flushdns }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The first line queries Active Directory for all computers that are not servers. The second line simply invokes the normal windows command “ipconfig /flushdns” on all computers.&lt;/p&gt;
&lt;p&gt;This technique could be used to run any command across all workstations. Very powerful, and dangerous. Use at your own risk!&lt;/p&gt;</description></item><item><title>Expired Ad Users and Powershell</title><link>https://development--vigilant-hodgkin-644b1e.netlify.com/post/expired-ad-users-and-powershell/</link><pubDate>Mon, 02 Jun 2014 16:14:25 -0400</pubDate><guid>https://development--vigilant-hodgkin-644b1e.netlify.com/post/expired-ad-users-and-powershell/</guid><description>
&lt;h2 id=&#34;the-setup&#34;&gt;The Setup&lt;/h2&gt;
&lt;p&gt;I came into the office today and was bombarded with users not being able to access our TFS server. Now, before I get too far into this story, you have to understand: Technically I’m only responsible for client-facing infrastructure. However, over the years I’ve started wearing more of a devops hat because, apparently, I’m quite good at it. That means TFS is now largely my problem. Funny how that works, eh? Anyway, back to TFS.&lt;/p&gt;
&lt;p&gt;There were a few odd things about this issue: the oddest being that some of our off-shore developers were having no problems and others just couldn’t get in. The users with issues also couldn’t access the web portal. We (at least me) hadn’t made any changes to TFS in about a month, so I started to investigate.&lt;/p&gt;
&lt;p&gt;After a brief panic about SharePoint not being installed properly (hey, I didn’t set up this system, I’m just its current keeper), I managed to trace the issue to network logons. Thank you, Security log! Wait, what’s this? It turns out many, many users recently had their accounts marked as expired… We had just implemented mandatory password rotation, and guess what? Exactly 90 days ago, a large batch of offshore development accounts was created! So now I had to reset credentials on 35+ accounts, and I’ll be damned if I’m going to do that manually!&lt;/p&gt;
&lt;p&gt;Enter PowerShell!&lt;/p&gt;
&lt;h2 id=&#34;list-all-accounts-in-an-ou-that-have-expired-passwords&#34;&gt;List all accounts in an OU that have expired passwords&lt;/h2&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre style=&#34;color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4&#34;&gt;&lt;code class=&#34;language-Powershell&#34; data-lang=&#34;Powershell&#34;&gt;Get-ADUser -searchbase &lt;span style=&#34;color:#e6db74&#34;&gt;&amp;#34;ou=contractors,dc=example,dc=com&amp;#34;&lt;/span&gt; -filter {Enabled &lt;span style=&#34;color:#f92672&#34;&gt;-eq&lt;/span&gt; $True} -Prop PasswordExpired | Where {$_.PasswordExpired } |select-object -property SAMAccountName,Name,PasswordExpired |format-table&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;h3 id=&#34;get-aduser&#34;&gt;Get-ADUser&lt;/h3&gt;
&lt;p&gt;SearchBase tells the Get-ADUser command to limit the search to a specific OU. This is very handy since I only have admin access to the one OU anyway. I filtered only for enabled accounts since trying to filter on PasswordExpired here didn’t work for some reason. I also explicitly called out the PasswordExpired property. This output was piped to the where-object commandlet.&lt;/p&gt;
&lt;h3 id=&#34;where-object&#34;&gt;Where-Object&lt;/h3&gt;
&lt;p&gt;This was where I filtered on the current object group. Since passwordExpired is a bool, no fanciness needed here. Then I piped the output to Select-Object.&lt;/p&gt;
&lt;h3 id=&#34;select-object&#34;&gt;Select-Object&lt;/h3&gt;
&lt;p&gt;I only cared about some specific data for the output. I used this to select the properties I needed. Finally, I piped to Format-Table to make everything display nicely.&lt;/p&gt;
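&lt;p&gt;For readers who don’t speak PowerShell, here is the same filter-and-select pipeline sketched in Python (illustrative only, with made-up user records):&lt;/p&gt;

```python
# Made-up stand-ins for what Get-ADUser would return for the OU.
users = [
    {"SAMAccountName": "adev1", "Name": "A Dev", "Enabled": True,  "PasswordExpired": True},
    {"SAMAccountName": "bdev2", "Name": "B Dev", "Enabled": True,  "PasswordExpired": False},
    {"SAMAccountName": "cdev3", "Name": "C Dev", "Enabled": False, "PasswordExpired": True},
]

# Filter (Enabled -eq $True), Where {$_.PasswordExpired}, then Select-Object:
expired = [
    {"SAMAccountName": u["SAMAccountName"], "Name": u["Name"]}
    for u in users
    if u["Enabled"] and u["PasswordExpired"]
]
print(expired)  # [{'SAMAccountName': 'adev1', 'Name': 'A Dev'}]
```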
&lt;h2 id=&#34;reset-passwords-for-accounts-in-an-ou-with-expired-passwords&#34;&gt;Reset passwords for accounts in an OU with expired passwords&lt;/h2&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre style=&#34;color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4&#34;&gt;&lt;code class=&#34;language-Powershell&#34; data-lang=&#34;Powershell&#34;&gt;Get-ADUser -searchbase &lt;span style=&#34;color:#e6db74&#34;&gt;&amp;#34;ou=contractors,dc=example,dc=com&amp;#34;&lt;/span&gt; -filter {Enabled &lt;span style=&#34;color:#f92672&#34;&gt;-eq&lt;/span&gt; $True} -Prop PasswordExpired | Where {$_.PasswordExpired } | &lt;span style=&#34;color:#66d9ef&#34;&gt;ForEach&lt;/span&gt;-Object {Set-ADAccountPassword -Identity $_.SAMAccountName -NewPassword (ConvertTo-SecureString -AsPlainText &lt;span style=&#34;color:#e6db74&#34;&gt;&amp;#34;Changeme1&amp;#34;&lt;/span&gt; -Force) }&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;h3 id=&#34;get-aduser-where-object&#34;&gt;Get-ADUser &amp;amp; Where-Object&lt;/h3&gt;
&lt;p&gt;These are the same as in the section above. We are filtering for enabled accounts in the contractors OU. This was piped to one of my favorite commands on earth: ForEach-Object.&lt;/p&gt;
&lt;h3 id=&#34;foreach-object&#34;&gt;ForEach-Object&lt;/h3&gt;
&lt;p&gt;This is, hands down, one of the handiest commands in PowerShell. Or any language for that matter. In this particular instance, we are running the Set-ADAccountPassword option for each object that we pass in. We pass the object’s SAMAccountName as the identity. We then create a new secure string password and pass that to -NewPassword. Then you hit enter and the magic runs!&lt;/p&gt;</description></item><item><title>Monitoring and Caching Dns</title><link>https://development--vigilant-hodgkin-644b1e.netlify.com/post/monitoring-and-caching-dns/</link><pubDate>Thu, 20 Jun 2013 15:18:48 -0400</pubDate><guid>https://development--vigilant-hodgkin-644b1e.netlify.com/post/monitoring-and-caching-dns/</guid><description>
&lt;figure&gt;
&lt;a data-fancybox=&#34;&#34; href=&#34;images/rabbit-hole-150.jpg&#34; &gt;
&lt;img src=&#34;images/rabbit-hole-150.jpg&#34; alt=&#34;&#34; &gt;&lt;/a&gt;
&lt;/figure&gt;
&lt;p&gt;Had an interesting issue today. One of the production systems suddenly went dark, and we found out about it from the client. This is never a good way to start a Thursday. It turns out that the client was having DNS issues and the domain was no longer valid. Relatively simple fix, crisis averted…&lt;/p&gt;
&lt;p&gt;But why didn’t the monitoring system pick it up?&lt;/p&gt;
&lt;p&gt;We use &lt;a href=&#34;Dotcom-Monitor&#34; target=&#34;_blank&#34;&gt;Dotcom-Monitor&lt;/a&gt; to check each of our sites on a regular basis. The monitor actually logs in to each website to verify functionality. What in the DNS world could cause this issue in such a scenario? How about a caching nameserver? Turns out, to limit the stress on their nameserver, Dotcom Monitor set up a standard caching nameserver that keeps a record in cache until the record expires. So even though DNS was no longer working for this site, the monitor thought everything was A-OK.&lt;/p&gt;
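&lt;p&gt;The failure mode is easy to reproduce with a toy TTL cache (an illustrative sketch, not Dotcom-Monitor’s actual resolver):&lt;/p&gt;

```python
class TtlCache:
    """Toy caching resolver: serves a stored answer until its TTL expires."""
    def __init__(self):
        self.store = {}

    def put(self, name, answer, ttl, now):
        self.store[name] = (answer, now + ttl)

    def get(self, name, now):
        answer, expires = self.store.get(name, (None, 0.0))
        if now >= expires:
            return None  # expired or never cached: must re-resolve upstream
        return answer

cache = TtlCache()
cache.put("example.com", "203.0.113.10", ttl=3600, now=0)
# Upstream DNS breaks at t=60, but the monitor's resolver still answers:
print(cache.get("example.com", now=60))    # 203.0.113.10 -- looks healthy
print(cache.get("example.com", now=7200))  # None -- only now does the outage surface
```

&lt;p&gt;Until the cached record’s TTL runs out, the monitor never asks the (broken) authoritative servers at all.&lt;/p&gt;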
&lt;p&gt;What can we do to fix this issue? Not much, unfortunately. Dotcom-Monitor would have to change their infrastructure, which would likely increase the load on their DNS servers significantly. Since that’s not likely, it looks like I’ll have to build a check into our internal monitor (&lt;a href=&#34;http://www.zabbix.com/&#34; target=&#34;_blank&#34;&gt;Zabbix&lt;/a&gt; based) that queries each domain directly against its authoritative (SOA) nameserver.&lt;/p&gt;</description></item><item><title>Page Speed Score WordPress</title><link>https://development--vigilant-hodgkin-644b1e.netlify.com/post/page-speed-score-wordpress/</link><pubDate>Wed, 12 Jun 2013 15:11:52 -0400</pubDate><guid>https://development--vigilant-hodgkin-644b1e.netlify.com/post/page-speed-score-wordpress/</guid><description>
&lt;figure&gt;
&lt;a data-fancybox=&#34;&#34; href=&#34;images/screenshot.jpg&#34; &gt;
&lt;img src=&#34;images/screenshot.jpg&#34; alt=&#34;&#34; width=&#34;250&#34; &gt;&lt;/a&gt;
&lt;/figure&gt;
&lt;p&gt;After configuring W3 Total Cache and playing around with Google’s free PageSpeed Insights tool, I was able to increase The End of the Tunnel’s score from 49 to 96! This is impressive to me because this site currently runs on the basic DreamHost shared environment plan. No dedicated servers, no fancy configurations, just good cache management. Fantastic!&lt;/p&gt;</description></item><item><title>Flush DNS Cache for Single Domain</title><link>https://development--vigilant-hodgkin-644b1e.netlify.com/post/flush-dns-cache-for-single-domain/</link><pubDate>Tue, 11 Jun 2013 15:05:15 -0400</pubDate><guid>https://development--vigilant-hodgkin-644b1e.netlify.com/post/flush-dns-cache-for-single-domain/</guid><description>&lt;p&gt;I was working on the site today and ran into an issue: Our caching DNS server (Windows 2008) was holding on to the old webserver’s IP. This wasn’t a problem for me locally as I used the old hosts file trick to point to the new server. However, this meant I couldn’t show other folks the site until either the cache was completely flushed or the record expired.&lt;/p&gt;
&lt;p&gt;A little googling later, and I found this little command from &lt;a href=&#34;ServerFault&#34; target=&#34;_blank&#34;&gt;ServerFault&lt;/a&gt;.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre style=&#34;color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4&#34;&gt;&lt;code class=&#34;language-shell&#34; data-lang=&#34;shell&#34;&gt;dnscmd dnsserver.local /NodeDelete ..Cache whatever.com &lt;span style=&#34;color:#f92672&#34;&gt;[&lt;/span&gt;/Tree&lt;span style=&#34;color:#f92672&#34;&gt;]&lt;/span&gt; &lt;span style=&#34;color:#f92672&#34;&gt;[&lt;/span&gt;/f&lt;span style=&#34;color:#f92672&#34;&gt;]&lt;/span&gt;
/tree Specifies to delete all of the child records.
/f Executes the command without asking &lt;span style=&#34;color:#66d9ef&#34;&gt;for&lt;/span&gt; confirmation.&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;This allows you to clear just a small portion of the cache, as you define it. Pretty handy!&lt;/p&gt;</description></item></channel></rss>