• 7 min read
  • My current server monitoring setup is documented in my CentOS 5 server tutorials. It consists of Nagios for service monitoring and Cacti for graphing metrics such as system load, network usage and disk space.

    Both tools are very commonly used and lots of resources are available on their setup & configuration, but I never shook the feeling that they were just plain clunky. Over the past several months, I have done a fair amount of research and evaluated a variety of tools, and thankfully came across the monitoring sucks effort, which aims to collect a bunch of blog posts on monitoring tools and their respective merits and weaknesses. The collection of all that documentation is now kept in the monitoring sucks GitHub repo.

    Long story short, each tool seems to only do part of the job. I hate redundancy, and I believe that a good monitoring system would:

    1. provide an overview of the current service status;
    2. notify you appropriately and in a timely manner when things go wrong;
    3. provide a historical overview of data to establish some sort of baseline / normal level for collected metrics (e.g. graphs and 99th percentiles); and
    4. ideally, be able to react proactively when things go wrong

    You'll find that most tools do only two of the four above well, which is just enough to be annoyingly useful. You end up deploying two or three overlapping tools, each doing one thing well and the rest just okay. Well, I don't like to live with workarounds.

    Choosing the right tool for the job

    I did a bit of research and solicited some advice on r/sysadmin, but sadly the post did not get enough upvotes to be widely noticed. Collectd looked like a wonderful utility: it is simple, high-performance and focused on doing one thing well. It was trivial to get it writing tons of system metrics to RRD files, at which point Visage provided a smooth user interface. Although it was a step in the right direction as far as what I was looking for, it still only did two of the four items above.

    Introducing Riemann

    Then I stumbled across Riemann through its author's Monitorama 2013 presentation. Although it is not the easiest to configure and its notification support is a bit lacking, it has several features that immediately piqued my interest:

    • Its architecture forgoes the traditional polling and instead processes arbitrary event streams.
      • Events can contain data (the metric) as well as other information (hostname, service, state, timestamp, tags, ttl)
      • Events can be filtered by their attributes and transformed (percentiles, rolling averages, etc)
      • Bringing new machines under monitoring is as easy as pushing to your Riemann server from the new host
      • Embed a Riemann client into your application or web service and easily add application level metrics
      • Let collectd do what it does best and have it shove the machine's health metrics to Riemann as an event stream
    • It is built for scale, and can handle thousands of events per second
    • Bindings (clients) are available in a multitude of languages
    • Has (somewhat primitive) support for notifications and reacting to service failures, but Riemann is extensible so you can add what you need
    • An awesome, configurable dashboard

    All of this is described more adequately and in greater detail on its homepage. So how do you get it?

    Installing Riemann

    This assumes you are running CentOS 6 or newer (e.g. a recent version of Fedora). In the case of CentOS, it also assumes that you have installed the EPEL repository.

    yum install ruby rubygems jre-1.6.0
    gem install riemann-tools daemonize
    rpm -Uhv http://aphyr.com/riemann/riemann-0.2.4-1.noarch.rpm
    chkconfig riemann on
    service riemann start

    Be sure to open ports 5555 (both TCP and UDP) and 5556 (TCP) in your firewall. Riemann uses 5555 for event submission and 5556 for WebSocket connections to the server.
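
    For example, on a stock EL6 iptables setup this could look something like the following (a minimal sketch; adapt it to your existing ruleset or firewall management tool):

    iptables -I INPUT -p tcp --dport 5555 -j ACCEPT   # Riemann event submission (TCP)
    iptables -I INPUT -p udp --dport 5555 -j ACCEPT   # Riemann event submission (UDP)
    iptables -I INPUT -p tcp --dport 5556 -j ACCEPT   # Riemann WebSocket server
    service iptables save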

    Riemann is now ready to go and accept events. You can modify your configuration at /etc/riemann/riemann.config as required - here is a sample from my test installation:

    ; -*- mode: clojure; -*-
    ; vim: filetype=clojure

    (logging/init :file "/var/log/riemann/riemann.log")

    ; Listen on the local interface over TCP (5555), UDP (5555), and websockets (5556)
    (let [host "my.hostname.tld"]
      (tcp-server :host host)
      (udp-server :host host)
      (ws-server  :host host))

    ; Expire old events from the index.
    (periodically-expire 5)

    ; Custom stuffs

    ; Graphite server - connection pool
    (def graph (graphite {:host "localhost"}))
    ; Email handler
    (def email (mailer {:from "riemann@my.hostname.tld"}))

    ; Keep events in the index for 5 minutes by default.
    (let [index (default :ttl 300 (update-index (index)))]

      ; Inbound events will be passed to these streams:
      (streams

        (where (tagged "rollingavg")
          (rate 5
            (percentiles 15 [0.5 0.95 0.99] index)
            index graph
          )
          (else
            index graph
          )
        )

        ; Calculate an overall rate of events.
        (with {:metric 1 :host nil :state "ok" :service "events/sec" :ttl 5}
          (rate 5 index))

        ; Log expired events.
        (expired
          (fn [event] (info "expired" event)))
    ))

    The default configuration was modified here to do a few things differently:

    • Expire old events from the index every 5 seconds
    • Automatically calculate percentiles for events tagged with rollingavg
    • Send all event data to Graphite for graphing and archival
    • Set an email handler that, with some minor changes, could be used to send service state change notifications

    Installing Graphite

    Graphite can take data processed by Riemann and store it long-term, while also giving you tons of neat graphs.

    yum --enablerepo=epel-testing install python-carbon python-whisper graphite-web httpd

    We now need to edit /etc/carbon/storage-schemas.conf to tweak the time density of retained metrics. Since Riemann can process events at a high rate, I like to retain metrics at a higher precision than the default settings:

    # Schema definitions for Whisper files. Entries are scanned in order,
    # and first match wins. This file is scanned for changes every 60 seconds.
    #
    #  [name]
    #  pattern = regex
    #  retentions = timePerPoint:timeToStore, timePerPoint:timeToStore, ...

    # Carbon's internal metrics. This entry should match what is specified in
    # CARBON_METRIC_PREFIX and CARBON_METRIC_INTERVAL settings
    [carbon]
    pattern = ^carbon\.
    retentions = 60:90d

    #[default_1min_for_1day]
    #pattern = .*
    #retentions = 60s:1d

    [primary]
    pattern = .*
    retentions = 10s:1h, 1m:7d, 15m:30d, 1h:2y

    After making your changes, start the carbon-cache service:

    service carbon-cache start
    chkconfig carbon-cache on

    Now that Graphite's storage backend, Carbon, is running, we need to start Graphite:

    python /usr/lib/python2.6/site-packages/graphite/manage.py syncdb
    chown apache:apache /var/lib/graphite-web/graphite.db
    service httpd graceful

    Graphite should now be available on http://localhost - if this is undesirable, edit /etc/httpd/conf.d/graphite-web.conf and map it to a different hostname / URL according to your needs.

    Note: as of writing, there's a bug in the version of python-carbon shipped with EL6 that complains incessantly to your logs if the storage-aggregation.conf configuration file doesn't exist. Let's create it to avoid a hundred-megabyte log file:

    touch /etc/carbon/storage-aggregation.conf
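
    If you'd like to sanity-check the whole pipeline, you can push a test datapoint straight to Carbon's plaintext listener (port 2003 by default) and then look for it in the Graphite web UI. This assumes nc is installed; the metric name is arbitrary:

    echo "test.graphite.install 42 $(date +%s)" | nc localhost 2003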

    But what about EL5?

    I am not going to detail how to install the full Riemann server on EL5, as the dependencies are far behind and it would require quite a bit of work. However, it is possible to install riemann-tools on RHEL/CentOS 5 for monitoring the machine with minimal work.

    The riemann-health initscript requires the 'daemonize' command; install it via yum (EL6) or obtain it for EL5 here: http://pkgs.repoforge.org/daemonize/

    The riemann-tools ruby gem and its dependencies will require a few development packages in order to build, as well as Karan's repo providing an updated ruby-1.8.7:

    cat << EOF >> /etc/yum.repos.d/karan-ruby.repo
    [kbs-el5-rb187]
    name=kbs-el5-rb187
    enabled=1
    baseurl=http://centos.karan.org/el\$releasever/ruby187/\$basearch/
    gpgcheck=1
    gpgkey=http://centos.karan.org/RPM-GPG-KEY-karan.org.txt
    EOF
    yum update ruby\*
    yum install ruby-devel libxml2-devel libxslt-devel libgcrypt-devel libgpg-error-devel
    gem install riemann-tools --no-ri --no-rdoc
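
    With the gem installed, pointing the machine at your Riemann server is a one-liner. A quick sketch, assuming your server is reachable at my.hostname.tld as in the sample configuration above (wrap it in the riemann-health initscript or daemonize to keep it running permanently):

    # Stream this host's CPU, load, memory and disk metrics to Riemann
    riemann-health --host my.hostname.tld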
  • 9 min read
  • This how-to will show you how to:

    • Install ZFS via the ZFS on Linux project
    • Create and administer your ZFS data pools
    • Monitor disk health

    Build considerations & preparation

    Hardware plays a large role in the performance and integrity of your ZFS file server. Although ZFS will function on a variety of commodity hardware, you should consider the following before proceeding:

    ECC RAM

    The question of using non-ECC RAM gets asked again and again, but the bottom line is that you do need ECC. ZFS does its best to protect your data and ensure its integrity, but it cannot do so if the memory it uses cannot be trusted. ZFS is an advanced filesystem that can self-heal your files when silent bit rot occurs (bit flips on the disk from bad sectors or cosmic rays). But what if the information on disk is OK and an undetected bit flip occurs in your RAM instead? ZFS could attempt to "self-heal" and actually corrupt your data, because the information it received from RAM was incorrect.

    ZFS will run just fine without ECC RAM, but you run the risk of silent data corruption and (although very unlikely) losing the zpool entirely if your metadata gets corrupted in RAM and then subsequently written to disk. The chance of random bit flips is small, but if your RAM stick is going bad and riddling your filesystem with errors, you do not want to run the risk of catching that too late and losing everything.

    Keep in mind that in order to use ECC RAM, you must buy a motherboard AND CPU that both support it. There are also buffered (also known as registered) DIMMs and unbuffered DIMMs. Buffered DIMMs tend to be slower and more expensive but scale much better (e.g. a single board could support up to 192GB RAM), while unbuffered ECC RAM tends to be less expensive and performs better but doesn't scale as high (maximum of 32GB RAM on most current boards).

    A more detailed analysis on this topic is available in this FreeNAS forum post.

    Sufficient RAM for ARC cache

    Conventional wisdom is that you should plan to allocate 1GB of RAM per TB of usable disk space in your ZFS filesystem. ZFS will run on far less (e.g. 4GB), but then you have little space available for your ARC cache and your read performance may suffer. Plan ahead and buy enough RAM from the start, or be sure that you'll be able to get your hands on additional DIMMs if you plan on adding more disks later.

    RAID modes

    ZFS offers RAID modes similar to RAID 0, RAID 1, RAID 5 and RAID 6; ZFS uses the terms stripe, mirror, RAIDZ1 and RAIDZ2 respectively. It also offers a new type, RAIDZ3, which one-ups RAID 6 and can tolerate 3 disk failures.

    If you are unsure which pool type you would like to use, there is a very good and detailed comparison here. As the article points out, if you can afford it, striped mirrors (mirrored disks combined into a pool - effectively a RAID 0 of several groups of 2 disks in RAID 1) offer the best performance. However, you'll lose 50% of your usable disk capacity at a minimum, and 66% if you want to be able to sustain two drive losses (which I highly recommend you do).

    If you don't mind limiting performance to the equivalent of a single disk, RAIDZ2 is your best choice. It costs at worst a 40% loss in usable disk capacity, and that number shrinks as you add more disks. A RAIDZ2 with 6 disks, for example, only loses 2 of 6 disks to parity (33%). Always remember that RAID is redundancy, not a backup!

    Unrecoverable Read Error (URE)

    Consumer hardware has become extremely inexpensive for the capacity it offers, however it's not perfect. All hard disks are manufactured with a mean time between failures (MTBF) and a non-recoverable bit error rate specified. MTBF is nothing to worry about, as we can simply swap the disk out for a functioning one when it fails. The point of interest here is the non-recoverable bit error rate, which for consumer disks is typically 1 out of every 10^14 bits read. This means that if you read 10^14 bits from your disk, on average one bit is unrecoverably unreadable and irreparably lost.

    This is a significant problem with modern disk sizes: if a drive in a RAID were to fail and be replaced, the reconstruction process would read several TB of data from multiple disks, and there's a significant chance (often above 50% - calculator here) that a single URE will be encountered. In a traditional RAID setup, the controller cannot proceed and reconstruction ends. Your data is lost.
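
    To put a rough number on that claim, the probability of hitting at least one URE while re-reading N bits at a 1-in-10^14 error rate is approximately 1 - e^(-N/10^14). A small sketch (the 12TB figure is just an illustrative rebuild size):

    # Rough odds of at least one URE during a rebuild that re-reads 12 TB
    awk 'BEGIN {
      bits  = 12 * 1e12 * 8;            # 12 TB expressed in bits
      p_ure = 1 - exp(-bits / 1e14);    # ~ 1 - (1 - 1e-14)^bits
      printf "P(at least one URE) ~ %.0f%%\n", p_ure * 100
    }'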

    However, because ZFS is in control of both the filesystem and the disks in a software RAIDZ, it can degrade gracefully should you encounter a URE: it knows exactly where that bit fell. Instead of dropping your array, it simply notifies you which file was lost and moves on with the reconstruction. ZFS is also aware of free space, and so doesn't need to waste time reconstructing the free space on a replacement disk.

    Disk controllers

    Although your hardware may support RAID, do not use it. RAIDZ2 is a software RAID implementation that works best when ZFS has direct control over your disks. Running ZFS on top of a hardware RAID array eliminates some of the advantages of ZFS, such as being able to gracefully recover from an Unrecoverable Read Error (URE), as discussed above.

    If you want to add additional disks and are looking to buy a PCIe add-in card, ensure that you purchase an HBA (Host Bus Adapter) that will present the disks as JBOD, not a RAID-only controller. A popular choice is the IBM M1015 cross-flashed to IT mode, which offers excellent performance for the price.

    Optimizing the number of disks

    In addition to the above, consider that the number of disks you choose to use in your pool can also have an impact on performance. Adam Nowacki posted this helpful data on the freebsd-fs mailing list (emphasis mine):

    Free space calculation is done with the assumption of 128k block size.
    Each block is completely independent so sector aligned and no parity
    shared between blocks. This creates overhead unless the number of disks
    minus raidz level is a power of two.
    Above that is allocation overhead
    where each block (together with parity) is padded to occupy the multiple
    of raidz level plus 1 (sectors). Zero overhead from both happens at
    raidz1 with 2, 3, 5, 9 and 17 disks and raidz2 with 3, 6 or 18 disks.

    Personally, I recommend RAIDZ2 with 6 disks - it offers a very nice balance between the cost of disks, performance and redundancy.

    Installing ZFS

    sudo yum localinstall --nogpgcheck http://archive.zfsonlinux.org/fedora/zfs-release$(rpm -E %dist).noarch.rpm
    sudo yum install zfs

    Reboot your machine and you should be ready to create a zpool.
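
    A quick sanity check after the reboot (assuming the zfs kernel module built and loaded correctly):

    lsmod | grep zfs      # the zfs module should be listed
    zpool status          # should report "no pools available" at this point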

    Create the zpool

    Now that ZFS is installed, creating the zpool is relatively straightforward. The ArchLinux Wiki ZFS page details several zpool creation examples.

    zpool create -f [poolname] [type] [disks]
    zfs set atime=off [poolname]
    zfs set compression=lz4 [poolname]

    Replace [poolname] with the name of your zpool (e.g. "data" or "tank"), [type] with the ZFS pool type (e.g. raidz2) and finally [disks] with the disks you wish to use to create the zpool. There are several ways to specify the disks; see the ZFS on Linux FAQ for how to best choose device names.

    Note that the contents of these disks will be erased and ZFS will assume control over the partition table & disk data.
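
    As a concrete illustration, here is what a 6-disk RAIDZ2 pool named "tank" might look like (the disk IDs below are placeholders - substitute the /dev/disk/by-id entries for your own drives):

    zpool create -f tank raidz2 \
        /dev/disk/by-id/ata-DISK1_SERIAL /dev/disk/by-id/ata-DISK2_SERIAL \
        /dev/disk/by-id/ata-DISK3_SERIAL /dev/disk/by-id/ata-DISK4_SERIAL \
        /dev/disk/by-id/ata-DISK5_SERIAL /dev/disk/by-id/ata-DISK6_SERIAL
    zfs set atime=off tank
    zfs set compression=lz4 tank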

    Create one or more datasets

    ZFS datasets (or "filesystems") behave like multiple filesystems on a disk would, except they are all backed by the same storage pool. You can divide your pool into several filesystems, each with different options and mountpoints, and the free space is shared among all filesystems on the pool.

    zfs create -o casesensitivity=mixed -o mountpoint=/[poolname]/[dataset] [poolname]/[dataset]
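
    For example, a hypothetical "media" dataset on a pool named "tank", mounted at /tank/media:

    zfs create -o casesensitivity=mixed -o mountpoint=/tank/media tank/media
    zfs list    # verify the new dataset and its mountpoint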

    Automatic scrubbing

    To ensure all disks are synchronized and to proactively detect bit rot, you can automatically scrub the disks at night once a week:

    cat << EOF > /etc/cron.d/zfs-check
    0 0 * * 0 root /usr/sbin/zpool scrub [poolname]
    EOF

    Remember to replace [poolname] as per above. Use zpool status -v to get the pool status and display any scrub errors.

    Receiving email notifications

    Installing an MTA

    All of ZFS's fancy data protection features are useless if we cannot respond quickly to a problem. Since Fedora 20 does not include a Mail Transfer Agent (MTA) by default, install one now to ensure we can receive email notifications when a disk goes bad:

    yum install postfix
    cat << EOF >> /etc/postfix/main.cf
    myhostname = yourname.dyndns.org
    relayhost = mailserver.com:port
    EOF
    systemctl enable postfix
    systemctl start postfix

    You need to configure myhostname to be something valid; in this case, I have chosen a free DynDNS hostname. Most ISPs block port 25, so you will need to use their mail server coordinates for relayhost, or alternatively you can always set up a free Gmail account and use Gmail as your relay on an alternate port (e.g. 587).
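
    As a rough sketch of the Gmail-relay variant (the address and credentials below are placeholders, and Gmail may additionally require an application-specific password):

    cat << EOF >> /etc/postfix/main.cf
    relayhost = [smtp.gmail.com]:587
    smtp_use_tls = yes
    smtp_sasl_auth_enable = yes
    smtp_sasl_security_options = noanonymous
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
    EOF
    echo "[smtp.gmail.com]:587 yourname@gmail.com:yourpassword" > /etc/postfix/sasl_passwd
    postmap /etc/postfix/sasl_passwd
    chmod 600 /etc/postfix/sasl_passwd*
    systemctl restart postfix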

    Monitoring SMART disk health information

    The smartd daemon can monitor your disks' health and notify you immediately should an error turn up.

    yum install smartmontools
    systemctl enable smartd

    Edit /etc/smartmontools/smartd.conf and change the -m root flag to point to the desired email address, for example -m myaddress@gmail.com. To test whether notifications are working correctly, add the line DEVICESCAN -H -m s.adam@diffingo.com -M test to the configuration and then restart smartd:

    systemctl restart smartd
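
    You can also spot-check a single drive by hand (replace /dev/sda with the device in question):

    smartctl -H /dev/sda    # overall health self-assessment
    smartctl -A /dev/sda    # vendor-specific SMART attributes (reallocated sectors, etc.)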

  • 1 min read
  • Work has kept me busy lately so it's been a while since my last post... I have been doing lots of research and collecting lots of information over the holiday break and I'm happy to say that in the coming days I will be posting a new server setup guide, this time for a server that is capable of running redundant storage (ZFS RAIDZ2), sharing home media (Plex Media Server, SMB, AFP) as well as a full Windows 7 gaming rig simultaneously!

    Windows runs in a virtual machine and is assigned its own real graphics card from the host's hardware using the brand-new VFIO PCI passthrough technique with the VGA quirks enabled. This does require a motherboard and CPU with support for IOMMU, more commonly known as VT-d or AMD-Vi.

  • 2 min read
  • I had the need to set up a new VM for software testing today, and I kept running into intermittent problems where VirtualBox would freeze and then trigger an OS X kernel panic, freezing/crashing the entire machine.

    Luckily, I had made a snapshot in the OS moments before the crash, so I had a safe place to revert to, but the crashes kept happening at seemingly random times.

    I set up a looped execution of 'dmesg' to see what was going on just before the crash and saw this at the next freeze:

    VBoxDrv: host_vmxon  -> vmx_use_count=1 rc=0
    VBoxDrv: host_vmxoff -> vmx_use_count=0
    VBoxDrv: host_vmxon  -> vmx_use_count=1 rc=0
    aio_queue_async_request(): too many in flight for proc: 16.
    aio_queue_async_request(): too many in flight for proc: 16.
    aio_queue_async_request(): too many in flight for proc: 16.
    aio_queue_async_request(): too many in flight for proc: 16.
    aio_queue_async_request(): too many in flight for proc: 16.
    aio_queue_async_request(): too many in flight for proc: 16.

    The first VBoxDrv messages didn't pull up anything interesting in Google, but the other messages did: VirtualBox ticket #11219 and this blog post.

    It would appear that the default limits for the OS X kernel's asynchronous I/O are very, very low. VirtualBox likely exceeds them when your VM(s) are performing heavy disk I/O, hence the 'too many in flight' message in the logs.

    Luckily for us, there's a quick and easy solution:

    sudo sysctl -w kern.aiomax=512 kern.aioprocmax=128 kern.aiothreads=16

    then restart VirtualBox. These settings will apply until you reboot. To make the changes permanent, add/update the following lines in /etc/sysctl.conf:

    kern.aiomax=512
    kern.aioprocmax=128
    kern.aiothreads=16

    Note: you can probably set those limits even higher, as documentation for Sybase (by SAP) recommends values 2048 / 1024 / 16 when using its software.
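
    To see where you currently stand, you can read the limits back before and after applying the change:

    sysctl kern.aiomax kern.aioprocmax kern.aiothreads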

  • 5 min read
  • I have been testing the Drupal support module locally which features the ability to create tickets from email messages to an IMAP inbox. It requires the imap_open() PHP function provided by the imap PHP extension, which unfortunately is not included in the OS X builds of PHP.

    ivucica has published a wonderful script to his blog that compiles the IMAP extension without having to recompile PHP entirely, but unfortunately it was not working for me, and nobody else seemed to have my problem either. Compiling the imap library and PCRE went very smoothly, but when it came time to build the PHP extension, this error appeared during ./configure:

    checking whether build with IMAP works... no
    configure: error: build test failed. Please check the config.log for details.

    Well, crap. I check config.log and determine it's a linking failure:

    configure: program exited with status 1
    configure: failed program was:
    | /* confdefs.h */
    | #define PACKAGE_NAME ""
    | #define PACKAGE_TARNAME ""
    | #define PACKAGE_VERSION ""
    | #define PACKAGE_STRING ""
    | #define PACKAGE_BUGREPORT ""
    | #define PACKAGE_URL ""
    | #define COMPILE_DL_IMAP 1
    | #define HAVE_IMAP 1
    | #define HAVE_IMAP2000 1
    | #define HAVE_IMAP2004 1
    | #define HAVE_NEW_MIME2TEXT 1
    | #define HAVE_LIBPAM 1
    | #define HAVE_IMAP_KRB 1
    | #define HAVE_IMAP_SSL 1
    | /* end confdefs.h.  */
    |
    |
    | #if defined(__GNUC__) && __GNUC__ >= 4
    | # define PHP_IMAP_EXPORT __attribute__ ((visibility("default")))
    | #else
    | # define PHP_IMAP_EXPORT
    | #endif
    |
    |       PHP_IMAP_EXPORT void mm_log(void){}
    |       PHP_IMAP_EXPORT void mm_dlog(void){}
    |       PHP_IMAP_EXPORT void mm_flags(void){}
    |       PHP_IMAP_EXPORT void mm_fatal(void){}
    |       PHP_IMAP_EXPORT void mm_critical(void){}
    |       PHP_IMAP_EXPORT void mm_nocritical(void){}
    |       PHP_IMAP_EXPORT void mm_notify(void){}
    |       PHP_IMAP_EXPORT void mm_login(void){}
    |       PHP_IMAP_EXPORT void mm_diskerror(void){}
    |       PHP_IMAP_EXPORT void mm_status(void){}
    |       PHP_IMAP_EXPORT void mm_lsub(void){}
    |       PHP_IMAP_EXPORT void mm_list(void){}
    |       PHP_IMAP_EXPORT void mm_exists(void){}
    |       PHP_IMAP_EXPORT void mm_searched(void){}
    |       PHP_IMAP_EXPORT void mm_expunged(void){}
    |       void rfc822_output_address_list(void);
    |       void (*f)(void);
    |       char foobar () {f = rfc822_output_address_list;}
    |
    |     char foobar();
    |     int main() {
    |       foobar();
    |       return 0;
    |     }
    |
    configure:6808: result: no
    configure:6819: checking whether build with IMAP works
    configure:6863: cc -o conftest -g -O2   conftest.c  -Wl,-rpath,/usr/local/imap-2007f/lib -L/usr/local/imap-2007f/lib -lc-client -lpam  -lkrb5  >&5
    Undefined symbols for architecture x86_64:
      "_BIO_free", referenced from:
          _ssl_start in libc-client.a(osdep.o)
      "_BIO_new_mem_buf", referenced from:
          _ssl_start in libc-client.a(osdep.o)
      "_BIO_new_socket", referenced from:
          _ssl_start in libc-client.a(osdep.o)
      "_ERR_error_string", referenced from:
          _ssl_server_init in libc-client.a(osdep.o)
          _ssl_genkey in libc-client.a(osdep.o)
      "_ERR_get_error", referenced from:
          _ssl_server_init in libc-client.a(osdep.o)
          _ssl_genkey in libc-client.a(osdep.o)
      "_ERR_load_crypto_strings", referenced from:
          _ssl_server_init in libc-client.a(osdep.o)
      "_EVP_PKEY_free", referenced from:
          _ssl_start in libc-client.a(osdep.o)
      "_PEM_read_bio_PrivateKey", referenced from:
          _ssl_start in libc-client.a(osdep.o)
      "_PEM_read_bio_X509", referenced from:
          _ssl_start in libc-client.a(osdep.o)
      "_RAND_seed", referenced from:
          _ssl_onceonlyinit in libc-client.a(osdep.o)
      "_RSA_generate_key", referenced from:
          _ssl_genkey in libc-client.a(osdep.o)
      "_SSL_CTX_ctrl", referenced from:
          _ssl_server_init in libc-client.a(osdep.o)
          _ssl_start in libc-client.a(osdep.o)
      "_SSL_CTX_free", referenced from:
          _ssl_abort in libc-client.a(osdep.o)
      "_SSL_CTX_load_verify_locations", referenced from:
          _ssl_start in libc-client.a(osdep.o)
      "_SSL_CTX_new", referenced from:
          _ssl_server_init in libc-client.a(osdep.o)
          _ssl_start in libc-client.a(osdep.o)
      "_SSL_CTX_set_cipher_list", referenced from:
          _ssl_server_init in libc-client.a(osdep.o)
      "_SSL_CTX_set_default_verify_paths", referenced from:
          _ssl_start in libc-client.a(osdep.o)
      "_SSL_CTX_set_tmp_rsa_callback", referenced from:
          _ssl_server_init in libc-client.a(osdep.o)
      "_SSL_CTX_set_verify", referenced from:
          _ssl_start in libc-client.a(osdep.o)
      "_SSL_CTX_use_PrivateKey", referenced from:
          _ssl_start in libc-client.a(osdep.o)
      "_SSL_CTX_use_RSAPrivateKey_file", referenced from:
          _ssl_server_init in libc-client.a(osdep.o)
      "_SSL_CTX_use_certificate", referenced from:
          _ssl_start in libc-client.a(osdep.o)
      "_SSL_CTX_use_certificate_chain_file", referenced from:
          _ssl_server_init in libc-client.a(osdep.o)
      "_SSL_accept", referenced from:
          _ssl_server_init in libc-client.a(osdep.o)
      "_SSL_ctrl", referenced from:
          _ssl_start in libc-client.a(osdep.o)
      "_SSL_free", referenced from:
          _ssl_abort in libc-client.a(osdep.o)
      "_SSL_get_error", referenced from:
          _ssl_getdata in libc-client.a(osdep.o)
          _ssl_sout in libc-client.a(osdep.o)
      "_SSL_get_fd", referenced from:
          _ssl_getdata in libc-client.a(osdep.o)
          _ssl_server_input_wait in libc-client.a(osdep.o)
      "_SSL_get_peer_certificate", referenced from:
          _ssl_start in libc-client.a(osdep.o)
      "_SSL_library_init", referenced from:
          _ssl_onceonlyinit in libc-client.a(osdep.o)
      "_SSL_load_error_strings", referenced from:
          _ssl_server_init in libc-client.a(osdep.o)
      "_SSL_new", referenced from:
          _ssl_server_init in libc-client.a(osdep.o)
          _ssl_start in libc-client.a(osdep.o)
      "_SSL_pending", referenced from:
          _ssl_getdata in libc-client.a(osdep.o)
          _ssl_server_input_wait in libc-client.a(osdep.o)
      "_SSL_read", referenced from:
          _ssl_getdata in libc-client.a(osdep.o)
          _ssl_server_input_wait in libc-client.a(osdep.o)
      "_SSL_set_bio", referenced from:
          _ssl_start in libc-client.a(osdep.o)
      "_SSL_set_connect_state", referenced from:
          _ssl_start in libc-client.a(osdep.o)
      "_SSL_set_fd", referenced from:
          _ssl_server_init in libc-client.a(osdep.o)
      "_SSL_shutdown", referenced from:
          _ssl_abort in libc-client.a(osdep.o)
      "_SSL_state", referenced from:
          _ssl_start in libc-client.a(osdep.o)
      "_SSL_write", referenced from:
          _ssl_start in libc-client.a(osdep.o)
          _ssl_sout in libc-client.a(osdep.o)
      "_SSLv23_client_method", referenced from:
          _ssl_start in libc-client.a(osdep.o)
      "_SSLv23_server_method", referenced from:
          _ssl_server_init in libc-client.a(osdep.o)
      "_TLSv1_client_method", referenced from:
          _ssl_start in libc-client.a(osdep.o)
      "_TLSv1_server_method", referenced from:
          _ssl_server_init in libc-client.a(osdep.o)
      "_X509_NAME_oneline", referenced from:
          _ssl_open_verify in libc-client.a(osdep.o)
      "_X509_STORE_CTX_get_current_cert", referenced from:
          _ssl_open_verify in libc-client.a(osdep.o)
      "_X509_STORE_CTX_get_error", referenced from:
          _ssl_open_verify in libc-client.a(osdep.o)
      "_X509_free", referenced from:
          _ssl_start in libc-client.a(osdep.o)
      "_X509_get_ext_d2i", referenced from:
          _ssl_start in libc-client.a(osdep.o)
      "_X509_get_subject_name", referenced from:
          _ssl_open_verify in libc-client.a(osdep.o)
      "_X509_verify_cert_error_string", referenced from:
          _ssl_open_verify in libc-client.a(osdep.o)
      "_sk_num", referenced from:
          _ssl_start in libc-client.a(osdep.o)
      "_sk_value", referenced from:
          _ssl_start in libc-client.a(osdep.o)
    ld: symbol(s) not found for architecture x86_64
    clang: error: linker command failed with exit code 1 (use -v to see invocation)

    I couldn't figure out why it wasn't picking up the symbols from libssl, even when manually trying to compile said file and adding a -lssl flag.

    After an hour of struggling with it and my debugging efforts going nowhere, I try adding -lcrypto for the hell of it and it works!

    tl;dr: if you get this error, then simply replace the following line of the aforementioned script:

    ./configure --with-imap=/usr/local/imap-2007f --with-kerberos --with-imap-ssl

    With the following line that adds the required linker flags:

    LDFLAGS="-lssl -lcrypto" ./configure --with-imap=/usr/local/imap-2007f --with-kerberos --with-imap-ssl

    That's it!
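
    To confirm the result, assuming you copied the built imap.so into your extensions directory and added extension=imap.so to php.ini, a quick check:

    php -m | grep -i imap                               # the extension should be listed
    php -r 'var_dump(function_exists("imap_open"));'    # should print bool(true)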