Incrementally Migrating from Apache to nginx


I am currently in the process of migrating a bunch of sites on this machine from Apache to nginx. Rather than take everything down and migrate it all at once, I wanted to do this incrementally. But that raises a question: how do you incrementally migrate site configs from one to the other on the same machine, since both servers will need to be running and listening on ports 80 and 443?

The solution I came up with was to move Apache to different ports (8080 and 4443) and to set the default nginx config to be a reverse proxy!

The first step is to run a sed command in /etc/apache2/sites-available (or wherever your virtual host configs live) to batch-change all the port 80s in the virtual host configs to 8080s.

$ sed -i -- 's/:80/:8080/g' *.conf

Then, edit /etc/apache2/ports.conf, change the Listen directive from 80 to 8080, and restart Apache. You should now be able to hit all your existing sites on port 8080.
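
A quick sketch of that change, assuming the stock Debian/Ubuntu layout (the HTTPS Listen lines get the same treatment in the HTTPS section below):

# /etc/apache2/ports.conf
Listen 8080

$ sudo systemctl restart apache2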

Next, install nginx.

$ sudo apt install nginx

Then, edit the /etc/nginx/sites-enabled/default file. Start by removing all of its contents.

From there, it’s like any standard nginx reverse proxy out there with one small difference: you have to pass the Host header back to Apache so that it can do the name-based virtual host lookups.

server {
    listen 80 default_server;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}

Restart nginx, and any plain HTTP (port 80) sites you have should now be working.
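
A quick sanity check (example.com is a placeholder for one of your existing Apache-hosted domains):

# test the config, restart, then request a site through the proxy
$ sudo nginx -t && sudo systemctl restart nginx
$ curl -I -H "Host: example.com" http://127.0.0.1/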

Because this is the default config, adding more specific configs will override it, allowing you to start moving your sites over one at a time. nginx will serve a request from its own config if a matching one exists, or pass it through to Apache if it doesn't.
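
For example, here is a minimal sketch of a fully migrated plain-HTTP site (example.com and the web root are placeholders; real configs will be more involved):

server {
    # a site now served directly by nginx; requests for every other
    # hostname still fall through to the default_server proxy block
    listen 80;
    server_name example.com;

    root /var/www/example.com;
    index index.html;
}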

But What About HTTPS?

For HTTPS, things are more complex.

Basically, what we are doing now is creating a TCP-level reverse proxy that passes the TLS connection straight through to Apache, so we will need to leverage nginx's stream support. But we also need to check whether nginx can serve a given hostname natively before passing it off to Apache.

First, similar to above, convert all your old Apache configs to listen on 4443:

$ sed -i -- 's/:443/:4443/g' *.conf

Then, edit /etc/apache2/ports.conf, change the Listen directives from 443 to 4443, and restart Apache. You should now be able to hit all your existing sites on port 4443.
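
One way to spot-check a site on the new port (robpeck.com stands in for one of your Apache-hosted domains; --resolve makes curl send the right SNI name while still connecting locally):

$ sudo systemctl restart apache2
$ curl -kI --resolve robpeck.com:4443:127.0.0.1 https://robpeck.com:4443/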

Create a file called /etc/nginx/modules-enabled/999-default.conf and add the following config to it. (Files in this directory are included at the top level of nginx.conf, outside the http block, which is where a stream block has to live.)

stream {
    map $ssl_preread_server_name $selected_upstream {
        default apache;
    }

    upstream apache {
        server 127.0.0.1:4443;
    }

    upstream nginx {
        server 127.0.0.1:4442;
    }

    server {
        listen 443;
        proxy_pass $selected_upstream;
        ssl_preread on;
    }
}

Now restart nginx, and you should be able to hit all your sites on either port 80 or 443, HTTP or HTTPS, and have everything work.
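
To verify the TLS passthrough is working, you can check which certificate comes back for a given SNI name (substitute one of your own domains for robpeck.com):

# confirm the expected certificate is returned for a given SNI name
$ openssl s_client -connect 127.0.0.1:443 -servername robpeck.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer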

Now this is where things get a little hairy. As you migrate HTTPS configs, your new nginx server blocks will need to listen on port 4442, and you then update the map in the config above. For example:

    map $ssl_preread_server_name $selected_upstream {
        robpeck.com nginx;
        default apache;
    }

This will now send TLS connections with an SNI name of robpeck.com to nginx on port 4442, where our config will take over and return the correct certificate and files. Once your migration is complete, you'll just need to remove the 999-default.conf file and convert all your nginx configs that listen on 4442 to listen on 443.
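
For reference, a minimal sketch of what one of those migrated HTTPS server blocks might look like; the certificate paths and web root are placeholders for whatever your setup actually uses:

server {
    # terminate TLS in nginx on the internal port the stream proxy points at
    listen 4442 ssl;
    server_name robpeck.com;

    # placeholder certificate paths
    ssl_certificate /etc/letsencrypt/live/robpeck.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/robpeck.com/privkey.pem;

    root /var/www/robpeck.com;
    index index.html;
}

One thing to keep in mind: because the stream proxy is a plain TCP pass-through, the backends on 4442 and 4443 see connections coming from 127.0.0.1 rather than the real client address, so you'll need something like the PROXY protocol if you care about client IPs in your logs.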
