TL;DR: In Device Manager, change the NIC’s “Priority enabled and VLAN enabled” setting to just “Priority Enabled”.
So I switched to VirtualBox 5.1 and so far, so good. The one issue I was having was running a trunk with an untagged VLAN and a tagged VLAN for my guest. I double-checked my tagging on the switch, I double-checked my tagging in Fedora, but it never worked in Windows 10 for the tagged VLAN, only the untagged VLAN. Finally, and I don’t understand why, I went into the Windows 10 network settings for the actual NIC in Device Manager, changed “Priority enabled and VLAN enabled” to just “Priority Enabled”, and it works like a champ. It appears that the Windows 10 Intel Pro driver’s VLAN handling is incompatible with the way VirtualBox tags and untags packets for you, even though Windows isn’t managing the VLAN functionality.
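If you’d rather flip that setting from a script than by clicking through Device Manager, the NetAdapter cmdlets can do it. This is just a sketch: the adapter name “Ethernet” is a placeholder, and the exact display strings vary by driver, so list them first and match what your Intel driver actually shows:

```powershell
# List the advanced properties on the NIC to find the exact display name
# (on Intel Pro cards it is typically something like "Priority & VLAN")
Get-NetAdapterAdvancedProperty -Name "Ethernet" |
    Format-Table DisplayName, DisplayValue

# Switch the property value to priority-only tagging
# (display strings here are assumptions; copy yours from the listing above)
Set-NetAdapterAdvancedProperty -Name "Ethernet" `
    -DisplayName "Priority & VLAN" -DisplayValue "Priority Enabled"
```

Run from an elevated PowerShell; the change takes effect immediately, same as toggling it in Device Manager.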
So if you haven’t tried it yet, I strongly suggest giving Vivaldi a try. It combines the base code of Chromium, minus the Chrome features you may not want, with an interesting user interface. To me, this is the best setup:
- Chrome/Chromium underpinnings, so you don’t have website compatibility issues (Think Opera: a good browser, but odd support for online banking and other strict websites. Yes, I know you can spoof the user agent, but the fact that you have to doesn’t help non-power users, IMO.)
- The user interface is fun; some of my favorites:
- Mouse Gestures
- Adaptive user interface
For Fedora users, the Vivaldi RPM from their website installs easily enough, but you can’t play Vines, Twitter video, etc. because you don’t have functional ffmpeg support, nor do you have Flash if you use SiriusXM or other Flash-only websites. So, here’s a quick fix for you:
The Long Way
For Vine/Twitter video support, the Vivaldi RPM comes with a libffmpeg.so located in /opt/vivaldi/lib , but it doesn’t have support for H.264/MP4 due to licensing restrictions. What I did next was fire up a VM with Linux Mint and build chromium-codecs-ffmpeg-extra . From there, I took libffmpeg.so and copied it to my Fedora install at /opt/vivaldi/lib after backing up the stock libffmpeg.so . Double-check that your standard user can read the plugin:
# chmod 644 libffmpeg.so
For Flash Player, I had Chrome installed on the Linux Mint VM, so I just copied the directory /opt/google/chrome/PepperFlash to my Fedora machine and made sure everyone can read the files (a blanket chmod -R 644 would strip the execute bit from the directory itself and make it untraversable, so use X to keep directories browsable):
# chmod -R a+rX /opt/google/chrome/PepperFlash
Then I restarted Vivaldi and had up-to-date Flash and working H.264/MP4 support. Because I had the VM, it literally took me minutes to build, copy, and run these updates. Still, I’d love a repo from RPMFusion or someone else really trustworthy, but it appears that won’t happen with H.264/MP4.
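Put together, the transfer looks something like this. “mint-vm” and /path/to/ are placeholders for however you reach the build VM, and the destination paths assume the default Vivaldi RPM and Chrome layouts:

```shell
# On the Fedora box, back up the stock library first
cp /opt/vivaldi/lib/libffmpeg.so /opt/vivaldi/lib/libffmpeg.so.stock

# Pull the rebuilt library and the PepperFlash directory over from the VM
# ("mint-vm" and /path/to/ are placeholders)
scp mint-vm:/path/to/libffmpeg.so /opt/vivaldi/lib/libffmpeg.so
rsync -av mint-vm:/opt/google/chrome/PepperFlash/ /opt/google/chrome/PepperFlash/

# Make sure a standard user can read the files and traverse the directory
chmod 644 /opt/vivaldi/lib/libffmpeg.so
chmod -R a+rX /opt/google/chrome/PepperFlash
```

Restart Vivaldi afterward; if anything breaks, the .stock backup puts you right back where you started.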
The Easy Way
You are welcome to use the plugins I built for libffmpeg.so and Flash; I realize not everyone is as paranoid as I am, or you may just want to run them for testing purposes in a VM, etc. I do not believe I’m violating any distribution rights from Adobe for the Flash Player since it is the Pepper plugin as opposed to the Linux 11.2 version. I’m sure Adobe will let me know otherwise… Save libffmpeg.so to /opt/vivaldi/lib and libpepflashplayer.so to /opt/google/chrome/PepperFlash, and enjoy!
libffmpeg.so built on August 12th 2016
libpepflashplayer.so version 22.214.171.124
So I run Fedora 23 at home, and one of my VMs was running Windows 10, but the sound was awful: a horrible echo and a “scratchy” quality, though sometimes after playing for a bit it would “fix itself”. I tried the following:
- Different drivers in the .vmx file including sb16 (didn’t work at all)
- Tried Windows 7 to see if it was a Windows 10 issue (nope, the issue happened with any version of Windows)
- Issue didn’t happen with virtual Linux guests
- Fresh Windows 7, then 10 install (still had the issue by default)
- Tried the fix from VMware for audio with the speaker output (didn’t fix it; in fact, 24-bit made it worse)
Long story short, I finally found a thread that worked for me: it just installs a legacy audio adapter in Windows 10, and it worked perfectly!! While I was in the .vmx anyway, I also switched the network adapter from “E1000” to “vmxnet3” and installed the VMXNET3 driver for better performance!
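For reference, the network change is a one-line edit in the guest’s .vmx file. The key names below are from VMware’s standard config format; the sound lines are illustrative of what a typical .vmx carries, so check which emulated device your guest actually uses:

```
# In the guest's .vmx file (edit with the VM powered off)
ethernet0.virtualDev = "vmxnet3"   # was "e1000"; install the VMXNET3 driver in the guest

# Sound device entries look like this; the device value shown is an example
sound.present = "TRUE"
sound.virtualDev = "hdaudio"
```

After editing, power the VM back on and install the matching drivers inside Windows.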
So Amazon requires their EC2 machines to use private IPs, regardless of whether you have an Elastic IP. The guys at DirectAdmin have a nice guide to help you set up, but making it work on CentOS requires a little help that I found on the Amazon forums; in case that thread goes away, here it is:
$ cat /etc/sysconfig/network-scripts/ifcfg-eth0\:1
$ cat /etc/sysconfig/network-scripts/route-eth0\:1
default via 10.0.0.1 dev eth0:1 table main
10.0.0.0/24 dev eth0:1 src 10.0.x.x table main
$ cat /etc/sysconfig/network-scripts/rule-eth0\:1
from 10.0.x.x/32 table main
to 10.0.x.x/32 table main
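The contents of the ifcfg file itself didn’t survive above, so here is a minimal alias config along the lines of what the route and rule files assume. This is a reconstruction, not the original: 10.0.x.x is the instance’s private IP and the /24 netmask matches the route file above, so adjust both to your VPC subnet:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0:1 (reconstructed example)
DEVICE=eth0:1
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.0.x.x
NETMASK=255.255.255.0
```

Bring it up with `ifup eth0:1` (or restart networking) and the route/rule files above get applied along with it.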
So this guide from the folks at Zimbra has most of what you need to migrate; I’m going to add some extra steps that I found helpful, since the migration was botched a couple of times in testing. Migrating to AWS adds a few surprises:
- Absolutely obsess about getting your hostname, hosts file, and anything to do with DNS right. If you reboot, the AWS cloud config will nuke your settings, so when you make your changes, which are normally simple changes for a Linux admin, reboot and verify.
- Before you make your changes to the settings above, go into /etc/cloud/cloud.cfg and comment out, using # , the cloud-init modules called:
- The steps on copying the SSL certificates are a little rough; I went ahead and rsynced the /opt/zimbra/ssl folder right on over. I had all sorts of issues with server.key not matching, and it was much easier to do this and let Zimbra fix the permissions with its utility at the end.
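The specific module names didn’t survive above, but the usual suspects for hostname clobbering in a stock /etc/cloud/cloud.cfg are the hostname-related modules. These names are the standard cloud-init ones, not necessarily the exact list the original post had, so double-check your own cloud.cfg:

```
# /etc/cloud/cloud.cfg -- comment out the modules that rewrite
# hostname and /etc/hosts on every boot
cloud_init_modules:
 ...
# - set_hostname
# - update_hostname
# - update_etc_hosts
 ...
```

Reboot afterward and confirm your hostname and hosts file survive, per the first bullet above.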
Otherwise, make sure your Elastic IP from Amazon isn’t already on a blacklist with Spamhaus; I’m a fan of mxtoolbox for checking the blacklists. Finally, don’t forget your reverse DNS setup for your static IP; you can just click here and submit your information to Amazon. This is also the same spot to ask to get off of Spamhaus; if your IP is on a list, expect up to a week to get it resolved, though.
- Do not download from the sslvpn.html page of your VPN appliance; it won’t have all the steps for the Linux side of the house.
- Do download the CRT, PEM, and CA files from your Windows or Mac SSLVPN client installation. On Windows they’re found in “%AppData%\Watchguard\Mobile VPN”; grab the following and copy them over to your Linux installation:
- If you are using SELinux, you must copy the files from step 2 into ~/.cert, or SELinux will complain and stop your connection; the certificates can’t just lie around your home folder without intervention not covered here.
- Set up an OpenVPN client using the following settings:
- gateway = your pick
- Connection type = X.509 with password
- CA file = ca.crt
- Certificate = client.crt
- key = client.pem
- Key password = <unneeded>
- Username and password are from your setup
- While setting up the connection, you need to tweak the settings by clicking “Advanced” on the screen from step 3:
- Gateway port = 443
- Tunnel and UDP fragment size = Automatic
- Check “Use custom renegotiation interval” = 36060 (default from Watchguard)
- Check “Use TCP Connection”, as this is an SSLVPN on TCP 443
- On the Security tab, your cipher should be AES-256-CBC and the HMAC authentication should be SHA-1
That’s it, the connection will fire right up and run without further settings. Enjoy!
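If you’d rather skip the NetworkManager GUI, the same settings map onto a plain OpenVPN client config. This is a sketch using standard OpenVPN directives, with the gateway hostname as a placeholder:

```
# watchguard-sslvpn.ovpn -- client config equivalent to the GUI steps above
client
dev tun
proto tcp                   # SSLVPN runs over TCP 443
remote vpn.example.com 443  # placeholder; use your appliance's address
ca ca.crt
cert client.crt
key client.pem
auth-user-pass              # prompts for your username and password
reneg-sec 36060             # Watchguard's renegotiation interval
cipher AES-256-CBC
auth SHA1
```

Then connect with `sudo openvpn --config watchguard-sslvpn.ovpn` from the directory holding the certificate files.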
So at one particular company, they use Hyper-V (on 2012 R1) to drive their virtualization platform. I used to have problems with Hyper-V since it had poor Linux and BSD support, but that is coming along now: major Linux distros are embraced by Microsoft and the Linux kernel has support, but the tools to convert are lagging. A little Googling will turn up some guides using outdated tools that aren’t made by Microsoft. So here is an easier way to do it and save your day:
1. Download Clonezilla and copy the Physical Linux computer to an img file on an external drive of some sort. No special options are required if you do disk to image.
2. Create a DYNAMIC (thin-provisioned) virtual disk in Hyper-V that is big enough to absorb the source disk. If the machine was a 500GB machine, make a 550GB disk.
3. Restore the image file by booting Clonezilla from the guest you built in Hyper-V.
4. When done, download GParted and shrink the partition down to whatever size you want. If you were only using 200GB from my example above, you can shrink down to 300GB if you would like.
5. With the guest machine off, make sure your drive in Hyper-V is in vhdx format, not vhd. If it is vhd, convert it to vhdx, and only then can you shrink the virtual disk down to your GParted size.
6. As of this posting, Linux and Hyper-V don’t get along with dynamic MAC addresses, so set a static MAC on your guest’s NICs and be ready to set up your NICs again on the distro of your choice. Also, don’t use time sync from the Hyper-V integration tools; as of this posting it isn’t the most stable, and I use ntpdate rather than tweaking config files. time.nist.gov is a great NTP server to use.
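Steps 5 and 6 can be done from PowerShell on the Hyper-V host. A sketch, with the VM name, disk paths, size, and MAC address all as placeholders:

```powershell
# Step 5: convert the legacy .vhd to .vhdx (VM must be powered off)
Convert-VHD -Path 'D:\VMs\linux-guest.vhd' -DestinationPath 'D:\VMs\linux-guest.vhdx'

# Then shrink the converted disk down to the GParted-resized size
Resize-VHD -Path 'D:\VMs\linux-guest.vhdx' -SizeBytes 300GB

# Step 6: pin a static MAC on the guest's NIC (example address)
Set-VMNetworkAdapter -VMName 'linux-guest' -StaticMacAddress '00155D010203'
```

Point the VM at the new .vhdx in its settings once the conversion finishes; the old .vhd can be deleted after you’ve verified the guest boots.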
That’s really it; I’m running Kali Linux and CentOS on a Hyper-V advanced cluster.
So I had a hard time setting up pfSense, which is a good, open-source firewall if you put the time into it. In fact, I’ve used it in critical environments when a high-end Watchguard or “other” firewall wasn’t an option, and I’ve enjoyed its performance, but that’s one guy’s opinion. Regardless, here’s my project; I hope it helps you out!
– Use PFSense
– Create a public DMZ where I could continue to use my /29 network for servers. I wanted to host mail, web, and more while running an IPS (Snort) on it.
– All of this is done through virtual switching in VMWare, though it would work on a physical switch, too.
So, the basic pfSense setup won’t be covered, but here’s what I did after that. On the VMware box that was hosting pfSense, I had a WAN switch, a LAN switch, and a DMZ switch; I created one adapter for each switch and bound it to my pfSense box. The LAN will not be discussed further in this article, but nothing special is needed besides rules to allow DMZ –> LAN and LAN –> DMZ. Also, when creating the DMZ adapter, make sure you choose “None” for the IPv4 configuration. Remember that the firewall sits quietly in the middle of the path, which can be confusing for some newer network engineers: you don’t need an IP address to be a firewall, you just need to be able to stop the packet from continuing. Here’s a really important part for the VMware users who are using virtual switches: put the VMware switch into promiscuous mode on the WAN and DMZ adapters, otherwise the pfSense box will never see all of the traffic it needs, or will allow traffic in but not out (or vice versa, depending on what is promiscuous and what isn’t).
From there, put your DMZ and WAN into a bridge… This seems a little confusing at first in pfSense, but think about it this way: your public devices will be in the DMZ and, in theory, connected to the WAN port, but the pfSense box sits between the DMZ and WAN as a chokepoint for the traffic. Think of an hourglass passing sand between the two chambers.
From there, if you create WAN rules pointing from Any –> Public IP, they will control the traffic passing through due to that hourglass effect. You do not need to NAT anything because, after all, this is a public IP and you bridged the adapters together. Just open all the ports you need, but don’t forget to create an outbound rule from the DMZ allowing your traffic out, too. May I suggest you first create an allow-all from the public IP in the DMZ to Any for basic troubleshooting, before you lock everything down with good egress filtering. Once you have it working, you can enable Snort or traffic shaping to get the most out of your bandwidth. Good times!
So, some Q&A that is bound to come up:
Q. I read on a ton of sites that a Virtual IP is needed, why not here?
A. Virtual IPs make sense when you are NATing traffic into the firewall rather than setting up a true DMZ. I think what gets lost in training newer network guys is the difference between NATing with port forwarding vs. a public DMZ. NATing makes sense for a home network where you want to keep that “server” or device on the same network as your LAN, but putting your public-facing servers on your LAN with port forwarding and NATing makes it easier for the bad guys to own the network: if they can breach a server in the LAN that is part of your port forwarding and NATing setup, they can attack the LAN from there. With a public DMZ, they are stuck in the DMZ and have to go another hop to get to the LAN. Back to the original question: a Virtual IP tells pfSense to field requests to that IP on behalf of a device hiding back in the LAN, which isn’t what you want here; you want your public server to do that itself.
Q. Is the WAN IP of the pfsense box the gateway for my public device that is in the DMZ or is the upstream router?
A. It’s the upstream router, because, remember, the bridge is the chokepoint for traffic and the filtering happens there. If you point at the WAN IP of the pfSense box instead, you are introducing an extra hop, because pfSense is just going to push the traffic up to the upstream router anyway; and when packets come back to your public device, the upstream router will go straight to the public IP regardless. The filtering of your traffic is done quietly and without IPs, so it doesn’t matter where the gateway is, because pfSense sits between your public IP host and the default gateway.
So I was banging my head on a client’s machine the other day with a unique error. IE8 was in a processor loop with iertutil.dll , which I observed using Process Monitor. I tried the normal repairs: disabling add-ons, reinstalling IE8, Flash, Java, etc. Finally, I found the problem. See, the issue only occurred on a page or two and not most pages; that should have been a clue. The user had accidentally turned on “Compatibility Mode” for finance.yahoo.com, and whatever code Yahoo threw at it through Compatibility Mode caused the loop. Just disable it and save yourself the hour-plus I put into it 🙂
So with a client, I was hit with an error in Acronis Backup & Recovery 11 that caused these errors:
ProtectCommand: Failed to execute the command.
Error code: 41
Fields: $module : agent_protection_addon_glx_pic
Message: ProtectCommand: Failed to execute the command.
Error code: 53
Fields: $module : agent_protection_addon_glx_pic
Message: TOL cumulative completion result.
The issue with Acronis is that, well, it’s just a bad product now… I used to be a big Acronis fan, but when a product breaks on its own, literally 7 times across 3 clients, without any changes to the server or backup routine, it’s a bad product. What happens is the XML files and lock files become corrupt and it won’t talk to that folder anymore, because IT BROKE ITSELF!!!! The best solution is to abandon the backup folder and create a new one; then it doesn’t have to work with the old XML and lock files, and it fires right up and works.
I have never been able to fix Acronis backup jobs that roll over and die, nor has Acronis tech support, which has logged over 10 man-hours on these client machines. They really have no answer why; they just rebuild the backup job from scratch and put it in a new folder. That’s the support for their software… It’s like the PC repair guys at the local town shop who believe in this formula:
Customer + computer that has an error that isn’t related to a simple setting = reformat and charge for a system setup
I fully believe that Acronis has a similar banner hanging in the tech support office… /rant
So I was working on a customer’s FreeBSD server this month and, being a good admin, I made sure I checked the /usr/ports/UPDATING messages for anything of interest. Lo and behold, the following message:
AUTHOR: [email protected]
The FreeBSD ports tree switched from CVS to Subversion. A Subversion
to CVS exporter is in place to continue the support of CVSup.
Sure enough, I started researching the change and found that there wasn’t a great guide out there (yet), so I have this for all of you:
Changing over to SVN for updating your ports:
1. You’ll want to get SVN installed as root….
cd /usr/ports/devel/subversion && make install clean && rehash
(you can run with the defaults in the config screen)
2. Now, I find that deleting the old ports tree that I built over time using csup cleans up any garbage that can be in there (old distfiles you forgot to clean, INDEX-* files, and more). Then I do the following….
rm -rf /usr/ports/ && mkdir /usr/ports && rehash
Keep in mind that, since you already have root permissions, the ports directory will automatically be created with the correct permissions. I threw in a rehash because the system hated me twice, on two different servers, for killing the /usr/ports directory and recreating it… the rehash wakes the shell up to the change.
3. From here, and this is where I found the instructions distracting… If you are reading this, you probably aren’t a developer; in fact, you just want a fresh copy of the ports tree so you can run portupgrade or whatever method you like to use. The directions I found out there require logins and more, but that’s because “you are a developer” from the perspective of the authors…. Our lives are easier than those instructions by using….
svn co svn://svn.freebsd.org/ports/head /usr/ports
The co is short for “checkout”; from there, I would do a portupgrade -ar , which will rebuild that INDEX-* file in /usr/ports and correctly continue from where it should, as if you had done a csup.
4. So how do you update for new ports???? Simple…
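The command itself didn’t survive here, but for an SVN working copy the update step would presumably just be running svn’s update against the checkout from step 3:

```shell
# Pull the latest ports tree changes into the existing checkout
svn update /usr/ports
```

Follow it with your usual portupgrade run, same as after a csup.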
Now, what about updating /usr/src? I normally don’t do that; I leave it to freebsd-update fetch and freebsd-update install . But if you HAD to rebuild it, it would be:
rm -rf /usr/src/ && mkdir /usr/src
svn co svn://svn.freebsd.org/base/release/8.2.0 /usr/src (or whatever release you wanted)
Though there isn’t a huge reason for us end users to care, IMO, it is way easier for the developers to make changes and slip in updates; plus, on our end, we enjoy a faster update process than csup gave us.