Next-gen infrastructure part 2

Around the end of 2016 I wrote a longer article about the state of IT infrastructure, trying to single out a trend I was observing. I was clearly inspired by Stanislaw Lem’s books as well as my own deep-dive sessions into technology. My conclusion back then was a vision of a container or unikernel approach written directly into field-programmable gate arrays, combining standard operations implemented in hardware with a microservice architecture – but there was a substantial challenge to be overcome first.

Recently, my friend Matt Dziubinski shared with me an excellent article by Kevin Morris, published in the Electronic Engineering Journal, that seems to show where things are taking off. In “Accelerating Mainstream Services with FPGAs”, Morris brings up Intel’s acquisition of Altera, a major player in the FPGA market. It’s been a while since then, with many comments suggesting Intel would take hardware infrastructure to the next level. Since then, however, the world hasn’t heard much about a brainchild of the top chip manufacturer fused with Altera’s hardware accelerators. How is this merger really driving acceleration in the data center?

The major risk I saw back when I wrote my article was enterprises’ capability to adopt such technology, given talent scarcity – that kind of low-level tinkering at mass scale is usually reserved for hyperscalers and high-frequency trading shops. According to Intel’s statements and the EEJ article itself, Intel is moving in a slightly different direction by launching PACs (news statement), or programmable acceleration cards, which are based on Altera’s Arria 10 GX FPGAs and come with a PCIe interface. That’s right – the smart people at Intel have addressed the challenge by letting specialized companies tune acceleration cards on a per-need basis and then simply insert them into the boxes of their preference: Dell, HP or Fujitsu. I am guessing integration with blade-type infrastructure is a matter of time as well. This way, enterprises no longer need to hire FPGA programmers with years of Verilog experience. In a consolidating market, that’s a major advantage.

And now, most importantly, a glimpse at the numbers. According to the article on EEJ: in financial risk analysis, there’s an 850% per-symbol algorithm speedup and a greater than 2x simulation time speedup compared with a traditional Spark implementation. On database acceleration, Intel claims 20x+ faster real-time data analytics, 2x+ traditional data warehousing, and 3x+ storage compression. And those speed-ups don’t yet account for the upcoming HBM2 memory or the 7nm chip manufacturing process (the FPGAs themselves are built on 20nm).

Gmail with your own domain

Gmail seems to be everyone’s favorite web frontend for email. Until recently it also offered a straightforward option to send from custom domains, so the recipient would see, for example, “from: yourname@yourdomain.com” instead of the not-so-professional name@gmail.com. These days, however, Google is promoting its G Suite set of products, which makes this modification a bit harder if your domain was purchased from an external vendor. Here’s a brief article explaining how to set up your own domain as the default “from” domain in Gmail.

First of all, to avoid reinventing the wheel, I googled (heh) for existing approaches and found many cases where an external MTA performs authenticated submission to smtp.gmail.com. This is sort of weird, but apparently that’s how Google fights spam and email address spoofing. I tried that approach only to find out that, in addition to the mandatory authentication (MTA to MTA with passwords?), Google also modifies the “From:” header of incoming messages, stamping in the Gmail account and moving the previous address to a new header line called “X-Google-Original-From”. As you can imagine, that makes things difficult to manage. On top of that, Gmail would re-deliver these messages back to wherever the MX records point, even though the desired configuration was in place, so I had to create a black hole rule to prevent SMTP flooding (a discard directive).
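
The exact rule isn’t shown here; as a rough sketch, one way to express such a black hole in Exim is a redirect router that maps the looping address to :blackhole: (the domain and local part below are placeholders, and the header check is my assumption about how to spot Gmail’s re-deliveries):

begin routers

# Hypothetical sketch: silently drop copies that Gmail loops back to the MX.
# Messages Gmail has already processed carry the X-Google-Original-From header.
# This router must come before the normal redirect/forward routers.
gmail_loop_blackhole:
  driver = redirect
  domains = newdomain.com
  local_parts = my_name
  condition = ${if def:h_X-Google-Original-From:}
  data = :blackhole: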

For that reason I tried a different approach. Here’s a brief explanation of how to set it up using Exim as the MTA (any other SMTP server would do as well). In this example, the MX records should point to the external server running the MTA (don’t forget the dot at the end). For outbound mail, Gmail acts as a client (MUA), authenticating to Exim over TLS and sending mail out through it. The Gmail configuration doesn’t change and is explained here. For this to work, two things are needed on the MTA: authentication data, and header rewriting at SMTP time if the domain you’re configuring isn’t the same as the primary FQDN of the MTA (or if you allow clients to send with multiple domains from the same server/container). To have mail go out with the right FQDN, a rewrite rule like this is required:


begin rewrite

# Rewrite the MTA's primary address to the new domain, at SMTP time (S)
# and in all headers (h).
\N^my_name@my_fqdn\.com$\N    my_name@newdomain.com    Sh
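
The authentication data themselves aren’t shown above. As a rough sketch – assuming a PLAIN authenticator that is only advertised over TLS, with credentials kept in a lookup file at /etc/exim/passwd (the file path and layout are my assumptions, not part of the original setup) – the authenticator section could look like this:

begin authenticators

# Hypothetical sketch: PLAIN authentication, offered only on encrypted
# connections. /etc/exim/passwd holds lines of the form "user: password".
# Exim must also listen on the submission port Gmail uses (e.g. 587).
plain_tls:
  driver = plaintext
  public_name = PLAIN
  server_advertise_condition = ${if def:tls_in_cipher}
  server_condition = ${if eq{$auth3}{${lookup{$auth2}lsearch{/etc/exim/passwd}}}}
  server_set_id = $auth2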

As for incoming mail, the task is fairly easy. Instead of authenticating to Gmail, I redirect/forward the messages to the original Gmail account after accepting them as local. This can be achieved by creating an exception to the default redirect router (which normally reads /etc/aliases for redirection paths), adding a condition that matches the new domain in question. Here’s an example:


begin routers

# Matches only the new domain and looks the local part up in /etc/aliases,
# which maps it to the target Gmail address.
my_new_redirect:
  driver = redirect
  domains = newdomain.com
  data = ${lookup{$local_part}lsearch{/etc/aliases}}
  file_transport = address_file
  pipe_transport = address_pipe

Any file could be used instead of /etc/aliases – just make sure the UID/GID under which your MTA runs can read it. Following this example, the format would be: “my_name: gmail_name@gmail.com”. And that’s all – it’s SPF-friendly and, IMHO, cleaner and simpler than the authenticated approach. SMTP purists might curse you for rewriting headers, but well, Google does it too.
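
For completeness, the DNS side of this is just the MX record mentioned earlier plus an SPF record authorizing the MTA to send for the domain. A minimal sketch, assuming the MTA is also the domain’s MX (the hostname and domain are placeholders):

; Hypothetical zone snippet – note the trailing dots on the names.
newdomain.com.    IN MX  10  mta.newdomain.com.
newdomain.com.    IN TXT     "v=spf1 mx -all"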