for( auto& post : post_stream ) { …

  • Bermuda Growth Rates

    Continuing the life of a grass-blundering fool, I’m still committed to the slow expansion of Bermuda grass, even though I’ve plugged parts of my lawn with St. Augustine where things have gone bad.

    You’re not supposed to seed a Bermuda lawn because of color mismatch — or so I am told on the lawn care forums. The patch growing on the west side of my lawn has this deep, dark emerald color that I wish I could match. But, just to test the theory about seed coloration, I bought two random bags of grass seed and sowed them on opposite sides of a pot:


    The Pennington “Smart Seed” variety was supposed to grow at 2x the rate. The seeds themselves were a different shape and color than the seeds from the Scott’s bag. In the photograph below, the Pennington is on the left side of the pot and the Scott’s is on the right. (The pot just has some potting soil in it, but I don’t think this affected the result too much.)

    Sure enough, around 2.5 weeks out, the Pennington grass is growing much, much more quickly. The Scott’s grass is there if you look closely enough.

    More importantly, though, the coloration of this grass is nowhere near the deep, dark emerald color that I have growing on the lawn now. The forums are correct about seeding. I plan to transplant some of these seedlings into an isolated zone to tinker with the fertilization parameters and test out different shade levels. Let’s see how that goes.

  • Blundering and Watching the Grass Grow

    I moved to Florida a few years ago, partly out of frustration with pandemic life where I previously lived. There was a lot going on, and life was complicated, but I managed to acquire tenant status at a few rental properties to occupy my time as I awaited what felt like an inevitable COVID-19 demise. The terms of the lease agreements required me to either take care of the lawns myself or hire a professional. Initially, I thought this wouldn’t be a big deal. Grass is just grass—just cut it.

    All the rental properties I lived in had St. Augustine grass. I became familiar with this type of grass; however, most of the irrigation parameters had been preset for me. The owners had already scheduled the fertilizer according to protocols and had stated their cutting height preferences. Generally, I only needed to replace bare spots with plugs or sod. I did the bare minimum to keep things green and healthy. Despite everyone around me being quite unimpressed that I handled my own lawn care, I largely enjoyed it.

    Then, I got my own lawn.

    I thought what I had was a different strain of St. Augustine grass because it was much softer and more pleasant to step on than the crunchy St. Augustine grass at the rental properties. I liked the feel on my feet when the grass was cut lower, so I continued to cut the lawn shorter and shorter, not realizing that a predominantly St. Augustine lawn does not fare well when cut too low.

    For one full season, I cut my grass short and it seemed like things were going okay. Then, at the start of spring, I tried to cut even lower. I thought this might eliminate the little dead, dormant patches and encourage fresher, greener growth.

    This was a mistake.

    While my lawn isn’t completely dead, it’s looking a bit worse for wear. I eased up on the cutting, improved the fertilizer situation, and increased the watering. However, an interesting side effect of my mistake was the appearance of a new grassy invader: Bermuda grass.

    My lawn turned out to be a mix of Bermuda and St. Augustine, but the longer blades of St. Augustine had mostly kept the Bermuda at bay and had dominated the growth in the lawn. I was fascinated by the appearance of Bermuda. I ran my fingers and feet through it. It was slim, soft, felt incredible, and spread rapidly wherever I cut the grass low. The first time I encountered a substantial patch of it, I was astonished. This grass is beautiful, invasive, and aggressive—the more I read about Bermuda, the more I became intrigued.

    It’s still early days, and I haven’t committed to a full Bermuda lawn, but I’ve decided to take a measured approach and gradually expand the Bermuda’s presence without completely destroying the St. Augustine. These two grasses require different care, and it’s not entirely clear if I’ve done irreparable harm to the St. Augustine. However, I’ve resolved to let the St. Augustine-dominant patches grow back to 4 inches, and where the Bermuda has taken hold, I plan to cut very low and use a reel mower to help it spread.

    Here’s a photo of a border region where Bermuda thrived after I mistakenly cut too low. The left side shows where Bermuda has exploded, and the right side is a failed attempt at verticutting what probably shouldn’t have been cut low and verticut.

    As a side note, I’m writing here on the blog because I will talk about grass with anyone willing to listen. It just so happens that no one in my orbit wants to hear about grass from me, so I figured I might as well return to blogging on my low-traffic blog and write about grass. There’s nothing too controversial about grass, right?

  • Selectively composing classes with constexpr functions, SFINAE, and parameter-pack driven private-inheritance in C++17 (Part 1)

    This blog post was inspired by a use-case I encountered in an open source project where the developers wanted to offer different functionality depending on whether the product being offered was an enterprise offering versus a community offering. Under normal circumstances, preprocessor defines were used to separate functionality. However, when certain combinations of behavior were required, the preprocessor macro situation made the code much messier and harder to unit test. This post highlights a proposed solution and a trick I find particularly helpful.

    The Basics

    I think private inheritance is undervalued in the C++ universe, particularly now in C++17 where we have things like if-constexpr and more sophisticated compile-time capabilities. As you likely already know, private inheritance, in the object-oriented software development universe, models a “has-a” relationship. (While I touch on the basics here a little bit, if you’re not familiar with private inheritance mechanics, the article here is also very helpful.)

    Consider the following:

    class Machine : private Gear {
        // ...
    };

    In the above code, a Machine “has a” Gear. If, in a particular application, the Machine has a Gear and also a Pulley, the class might then look something like:

    class Machine : private Gear, private Pulley {
        // ...
    };

    So far so good. When someone downstream of you wants to use your Machine in a C++ program, it wouldn’t be proper to cast your Machine to a Pulley; a Machine isn’t necessarily a Pulley. This is one of the benefits of private inheritance: outside the class, you can’t convert a pointer to the derived class (Machine) into a pointer to one of its private bases (Gear or Pulley). An additional bonus with private inheritance is that, with ‘using’ declarations, you can pull parts of a base class’s interface into the derived class’s interface. For example:

    class Machine : private Gear { 
      // ... 
    public:
      using Gear::someMethod;
    };

    In the above example, Gear’s someMethod() is now part of the public interface for Machine.
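
    To make this concrete, here is a minimal, self-contained sketch of the mechanics described above (Gear and someMethod() are just the placeholder names from the example; main() is only there to illustrate usage):

    class Gear {
    public:
        void someMethod() { /* placeholder behavior */ }
    };

    class Machine : private Gear {
    public:
        using Gear::someMethod;   // re-expose Gear's method as part of Machine's public interface
    };

    int main() {
        Machine m;
        m.someMethod();     // OK: pulled into Machine's public interface via the using-declaration

        // Gear* g = &m;    // error: Gear is an inaccessible (private) base of Machine
        return 0;
    }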

    Towards the Maintenance Quandary:

    Let’s move away from our Machine example and move onto another class, Feature. Recall from the blurb at the start of this blog post that I needed a Feature to behave a certain way depending on whether it’s an Enterprise or Community build. For the sake of discussion, let’s say I add a Developer build also. Furthermore, accept the following:

    • Enterprise functionality is provided by the EnterpriseFeatureImplementation class
    • Community functionality is provided by the CommunityFeatureImplementation class
    • Developer functionality is provided by the DeveloperFeatureImplementation class

    Our Feature implementation class could conceivably use methods from EnterpriseFeatureImplementation, CommunityFeatureImplementation, or DeveloperFeatureImplementation. A possible implementation could be something like the following:

    
    template< typename FEATURE_IMPL >
    class FeatureImpl : private FEATURE_IMPL {
    public:
      using FEATURE_IMPL::featureSpecificMethod;
    };
    
    constexpr auto feature_selector() {
        if constexpr( is_enterprise_build() ) {
            return EnterpriseFeatureImplementation();
        } else if constexpr ( is_developer_build() ) {
            return DeveloperFeatureImplementation();
        } else { /* if( is_community_build() ) */ 
            return CommunityFeatureImplementation();
        } 
    };
    
    using Feature = FeatureImpl< decltype( feature_selector() ) >;

    (For your benefit, I’ve created an execution environment for you to test and tinker with the above here: https://www.godbolt.org/z/4iRhwU.)

    On the surface, the above is fairly flexible; we have a constexpr function that evaluates some build criteria and lets us choose which interface to expose at compile time. However, there are some things about the above that can get irritating during development cycles. Namely, the feature_selector() function needs to be updated with various criteria in order to select which interface to use. I consider this a maintenance headache (thus, a maintenance quandary.)
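
    As an aside, the build-detection helpers referenced by feature_selector() could be as thin as constexpr wrappers around the existing preprocessor defines (the macro names below are hypothetical; this is just a sketch of one way those functions might be implemented):

    // Hypothetical build flags; in practice these would be supplied by the build system.
    constexpr bool is_enterprise_build() {
    #if defined( PRODUCT_ENTERPRISE_BUILD )
        return true;
    #else
        return false;
    #endif
    }

    constexpr bool is_developer_build() {
    #if defined( PRODUCT_DEVELOPER_BUILD )
        return true;
    #else
        return false;
    #endif
    }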

    A more practical approach involves the use of templates and parameter packs. The goal will be to detect automatically whether a supplied class provides the functionality we need. For example:

    template< typename... FEATURE_IMPLS >
    class FeatureImpl : private FEATURE_IMPLS... {
      ... // Code goes here
    };

    And the above would be used like so:

    using Feature = FeatureImpl< EnterpriseFeatureImplementation, DeveloperFeatureImplementation, CommunityFeatureImplementation >;

    But, an issue remains: How do we choose which methods to bring into the derived class interface via ‘using’ declarations? How do we detect automatically whether a class supplies functionality for the service level (Enterprise or Community) that we want?

    A Proposed Fix for Interface Method Resolution:

    Earlier, I referred to the utility of being able to pull from privately inherited classes so that you could build an interface for the derived class. If we’re using a parameter pack, this clearly becomes tricky. The approach I am proposing involves the use of constexpr functions and SFINAE — let’s start with the basics (and later expand towards fold-driven broadcast methods and CRTP in Part 2!)

    At the start of the blog post, I mentioned choosing between enterprise and community features in an application — a community-edition binary would not need to supply functionality that is present in an enterprise-edition binary, for example. Consider code like the following — and note the comment in the code:

    template< typename... FEATURE_IMPLS >
    class Feature : private FEATURE_IMPLS... {
    public:
        // How do we choose from FEATURE_IMPLS for 'using' statements?
    };

    My approach (to the question posed in the comment) is to resolve whether each class within FEATURE_IMPLS fits certain criteria and “upgrade” the interface based on whether the binary has been configured for a community-edition or enterprise-edition build. I would use code such as the following:

    template< typename... FEATURE_IMPLS >
    class Feature : private FEATURE_IMPLS... {
    public:
       using InterfaceMethodSelector = decltype( FeatureDetails::resolve_function< FEATURE_IMPLS... >() );
       using InterfaceMethodSelector::methodToUse;
    };

    The Resolve Function’s Meta Function:

    Let’s detail what a possible resolve_function might look like. First, let’s start with a simple meta-function — in this case, one that determines whether a feature implementation defines a constexpr member function marking the class as an enterprise feature. We use SFINAE such that substitution failure resolves to a derivative of std::false_type.

    template< typename T, typename = void >
    struct HasEnterpriseFeature : std::false_type {};
    
    template< typename T >
    struct HasEnterpriseFeature< T, std::enable_if_t< T().isEnterpriseFeature() > > : std::true_type {};
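
    As a quick sanity check of the trait, something like the following should hold (the two structs here are illustrative stand-ins, not part of the final example):

    // Assumes the HasEnterpriseFeature trait defined above is in scope.
    struct WithFlag    { constexpr bool isEnterpriseFeature() { return true; } };
    struct WithoutFlag {};

    static_assert(  HasEnterpriseFeature< WithFlag >::value,    "constexpr member detected" );
    static_assert( !HasEnterpriseFeature< WithoutFlag >::value, "no such member: SFINAE falls back to std::false_type" );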
    

    The Resolve Function Itself:

    The resolve function’s innards:

    // If we get to the last type, just return the last type.  The default
    // will just be whatever lands last in the parameter pack.
    template< typename FEATURE >
    constexpr auto resolve_function() {
        return FEATURE();
    }
    
    template< typename FEATURE_IMPL, typename FEATURE_IMPL2, typename... REST_OF_FEATURE_IMPLS >
    constexpr auto resolve_function() {
        if constexpr( HasEnterpriseFeature< FEATURE_IMPL >() ) {
            return FEATURE_IMPL();
        } else {
            return resolve_function< FEATURE_IMPL2, REST_OF_FEATURE_IMPLS... >();
        }
    }

    What the above resolve_function does is use a meta-function (HasEnterpriseFeature<T>) to determine if a member of the pack has a feature. If it doesn’t have the feature, the function moves on to the next member of the pack. Note that the above is just an example. You could create your own resolve_function and use whatever criteria you’d like to compose your derived class interface.

    The final code would look something like the following:

    
    #include <iostream>
    #include <type_traits>
    
    namespace FeatureDetails {
    
    template< typename T, typename = void >
    struct HasEnterpriseFeature : std::false_type {};
    
    template< typename T >
    struct HasEnterpriseFeature< T, std::enable_if_t< T().isEnterpriseFeature() > > : std::true_type {};
    
    // If we get to the last type, just return the last type.  The default
    // will just be whatever lands last in the parameter pack.
    template< typename FEATURE >
    constexpr auto resolve_function() {
        return FEATURE();
    }
    
    template< typename FEATURE_IMPL, typename FEATURE_IMPL2, typename... REST_OF_FEATURE_IMPLS >
    constexpr auto resolve_function() {
        if constexpr( HasEnterpriseFeature< FEATURE_IMPL >() ) {
            return FEATURE_IMPL();
        } else {
            return resolve_function< FEATURE_IMPL2, REST_OF_FEATURE_IMPLS... >();
        }
    }
    
    }
    
    struct EnterpriseFeatureImplementation {
        // Comment this out and resolution falls back to the Community implementation below
        constexpr auto isEnterpriseFeature() { return true; }
        auto methodToUse() { std::cout << "Enterprise implementation!" << std::endl; }    
    };
    
    struct CommunityFeatureImplementation {
        constexpr auto isEnterpriseFeature() { return false; }
        auto methodToUse() { std::cout << "Community implementation!" << std::endl; }
    };
    
    template< typename... FEATURE_IMPLS >
    class Feature : private FEATURE_IMPLS... {
    public:
       using InterfaceMethodSelector = decltype( FeatureDetails::resolve_function< FEATURE_IMPLS... >() );
       using InterfaceMethodSelector::methodToUse;
    };
    
    using SelectedFeature = Feature< EnterpriseFeatureImplementation, CommunityFeatureImplementation >;
    
    int main( int argc, char *argv[] ) {
        SelectedFeature f;
        f.methodToUse();
        return 0;
    }
    

    I have a sample of the use of the method here, on godbolt.org, an interactive online C++ environment: https://www.godbolt.org/z/4QMmMo

    Be sure to tinker with the example on Godbolt. Notice that the general rule still applies with templates: If the template isn’t used, the compiler doesn’t generate code for it. Observe the advantages of composing classes this way — you don’t pay for what you don’t use.

    You might still be asking some questions, however. What if you wanted to choose multiple methods from different classes? What if you wanted to execute multiple methods from subsets of the different classes? We’ll address those in Part 2.

  • Everything just broke: HDF5 woes on Ubuntu 18.04

    I wrote a system for capturing data and placing it into tables using HDF5. The system ran for 6 years or so, almost every single night, through various updates and upgrades. Then one day, I took production systems and just opportunistically updated them because I had some time on my hands. (I know, I know. It’s terrible dev-ops, but I’m a rag-tag operation here. Just look at this blog.) Oddly, as of two weeks ago, with no code changes, my system started generating .h5 files that were unusable. I only detected this problem as of 3 hours ago as a result of getting some bad calculations out of a batch job.

    I’m left thinking it was some kind of library change or library dependency that broke. I actually don’t know the exact reason (yet) why things broke, but I am guessing either some 1.8 to 1.10 issue or some library dependency triggered this new problem. Either way, I don’t care that much — I just want access to my old data back.

    Here are the symptoms and when I knew I was in trouble:

    HDF5 1.10 on Ubuntu would no longer read the read-only archive files that had been read from and used daily for years. h5ls failed. h5dump failed. hdfview failed. When I tried to use h5ls or h5dump, both simply complained that they were unable to open the older files.

    If you’re panicking about data being inaccessible (like I was two hours ago), I have a temporary work-around. (I will update this post if anything changes.) You need to drop back to the older version of hdf5 — and you need to build it from source.

    git clone https://bitbucket.hdfgroup.org/scm/hdffv/hdf5.git

    Then, once you’re in the hdf5 directory from the git clone:

    git fetch origin hdf5_1_8
    git checkout hdf5_1_8
    mkdir build
    cd build
    cmake -DHDF5_ENABLE_Z_LIB_SUPPORT=on -DZLIB_USE_EXTERNAL=off ..

    Then build. Once it’s built, you can ‘sudo make install’ and the default configuration will put the installation files in /usr/local/HDF_Group. Here, you’ll find a set of builds of tools like h5ls, etc. You will also find development headers and libraries you can link your own utilities to.

    If you set your linker and include paths properly and rebuild your utilities against this library, you’ll at least be able to get back into your data. This will give you some time with your data before you have to ponder future storage needs. In my case, I’ll be using the older library to re-encode the last two weeks of data.
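
    For example, here is a tiny sanity-check program against the rebuilt library. This is a minimal sketch; the paths in the build comment assume the default /usr/local/HDF_Group prefix mentioned above, and the exact versioned subdirectory will vary with your build.

    // Build roughly like (adjust the versioned path for your install):
    //   g++ check_h5.cpp -I/usr/local/HDF_Group/HDF5/<version>/include \
    //       -L/usr/local/HDF_Group/HDF5/<version>/lib -lhdf5 -o check_h5
    #include <hdf5.h>
    #include <iostream>

    int main( int argc, char *argv[] ) {
        if( argc < 2 ) {
            std::cerr << "usage: check_h5 <file.h5>" << std::endl;
            return 1;
        }

        hid_t file = H5Fopen( argv[1], H5F_ACC_RDONLY, H5P_DEFAULT );
        if( file < 0 ) {
            std::cerr << "unable to open " << argv[1] << std::endl;
            return 1;
        }

        std::cout << "opened " << argv[1] << " with the 1.8 library" << std::endl;
        H5Fclose( file );
        return 0;
    }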

    Perhaps it’s time to try Kudu or Parquet? Who knows? In any case, I hope this helps random people who might be flipping out over data loss.

  • Informal Latency Benchmark for Redis GET/SET using redis_cpp

    (From the half-baked-benchmarking-department in conjunction with the I-should-stash-the-results somewhere department…)

    Benchmarking is serious business on the internet — you don’t want to misrepresent anyone and their hard work on any given product.  That having been said, sometimes I just want ballpark values for a “lazy” estimate on what kind of numbers I’d get with a crude attempt at using a particular product.

    My use case was simple — I have a complex calculation that takes tens of milliseconds.  Calculating it on the fly when needed is too slow given the scale of work involved.  I wanted to precompute the values once and store them somewhere.  I was curious how low-effort I could get if I just stashed my computation results in redis and then fetched them on demand via a get using cpp_redis (available here: https://github.com/Cylix/cpp_redis).  I used some crude code (very crude — I didn’t dig deep and just copied some sample code in cpp_redis) that looks something like this:

     

        // 'client' is a connected cpp_redis::client and 'measurements' is a
        // std::vector<int64_t>, both set up earlier; CreateSampleSet() is a
        // helper (not shown) that returns the keys to query.
        auto keyspace = CreateSampleSet( client );
        for( auto& key : keyspace ) {
            struct timeval tv1, tv2;

            gettimeofday( &tv1, NULL );
            auto getreply = client.get( key );
            client.commit();

            getreply.wait();
            gettimeofday( &tv2, NULL );

            int64_t sample1 = (tv1.tv_sec * 1000000 + tv1.tv_usec);
            int64_t sample2 = (tv2.tv_sec * 1000000 + tv2.tv_usec);
            measurements.push_back( sample2 - sample1 );
        }
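
    For reference, turning the measurements vector into a summary (before plotting the histogram below) is just a matter of sorting and picking off percentiles.  This is not the exact code I used, just a minimal sketch of the idea:

    #include <algorithm>
    #include <cstdint>
    #include <iostream>
    #include <vector>

    // Quick-and-dirty percentile summary of the per-GET latencies (in microseconds).
    void summarize( std::vector< int64_t > measurements ) {
        if( measurements.empty() ) return;
        std::sort( measurements.begin(), measurements.end() );

        auto percentile = [&]( double p ) {
            size_t idx = static_cast< size_t >( p * (measurements.size() - 1) );
            return measurements[ idx ];
        };

        std::cout << "min: "    << measurements.front() << " us, "
                  << "median: " << percentile( 0.50 )   << " us, "
                  << "p99: "    << percentile( 0.99 )   << " us, "
                  << "max: "    << measurements.back()  << " us" << std::endl;
    }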

     

    I’ll let the numbers speak for themselves, but this is the histogram I got out of the experiment.  I used an i7-3930k on a moderately loaded system with 64GB of RAM (mostly available.)  Everything ran on the same machine with the default cpp_redis configuration using tacopie (which accompanies cpp_redis), connecting to localhost over port 6379.

    I tossed out the outliers.  There were a few, but they were a tiny fraction of the total number of outcomes.  The above distribution seemed like “typical” performance on my system.  The distribution didn’t change too much from run to run.

     

    In my use case, I will probably stick with stashing values in postgres and loading everything up front at once versus get/setting individual values.  However, it’s nice to know the cost of being lazy, should the need arise.  👌

  • ObjectMutator.js: Generating All (or some subset) of the Combinations of an Object at the Property (or Key/Value pair) Level

    Greetings, internet.  I’m in the middle of a project moving some old Qt stuff to the browser (using Javascript and React.js.)  The project isn’t all that exciting, but sometimes even the most boring projects can turn into delightful programming games and yield insights on where to increase productivity.  Case in point:  how a language’s type system can make a horrible problem much nicer in terms of programmer-time involved.  Oh, and I’m releasing code — in Javascript!  (The impatient can just go right here for the code.)

    The specific problem I was required to solve was coming up with a flexible way to generate numerous combinations of a given object.  The reason I needed all these combinations was two-fold:  for genetic algorithms and for brute-force testing back-end API endpoints.  (For the purpose of discussion, I’m using problems encountered while programming in C++ as a backdrop.  If you don’t know C++, just assume the struct is sort of similar to a Javascript object.)

    Framing the Problem

    Consider the following problem:

    You have an object with 3 members.  You initialize it to some value.  Now, you want variations of that object: all possible combinations (or a subset of combinations) of values for some (or even all) of the 3 members.  How do you do this?

    In probability and statistics, this problem is just a straightforward application of “The Counting Rule.”  So maybe in C++, you’d have some object “Foo” like:

    struct Foo {
        int a;
        float b;
        bool c;
    };

    Suppose Foo’s ‘a’ can take integer values from 0 through 4.  And let’s also say ‘b’ can range between 0 and 1 in increments of 0.2, and ‘c’ can be true or false.  To get all combinations of Foo with the mentioned constraints on ‘a’, ‘b’, and ‘c’, you would just program a set of nested for-loops, iterate over the combinations, and then add each Foo to some list-like structure and use the combinations.  But what happens if another programmer comes along and changes your code so that it’s now this:

    struct Foo {
        int a;
        float b;
        bool c;
        std::complex<float> d;
    };

    Now what?  Well, if you want the existing program to work but also want to support the ability to generate all combinations of Foo including ‘d’, you’d have to add another loop to generate the next level of combinations for the set of values ‘d’ can take.  If this structure (‘Foo’) changes often in your code at the hands of many developers, then you have a nuisance on your hands in terms of maintenance.  It would be nice to generalize this behavior for a general object.
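
    For concreteness, the brute-force approach described above looks roughly like this in C++ (a minimal sketch using the original three members and the ranges mentioned earlier; adding ‘d’ would mean bolting on yet another loop):

    #include <vector>

    struct Foo {
        int a;
        float b;
        bool c;
    };

    std::vector< Foo > allCombinations() {
        std::vector< Foo > combinations;
        for( int a = 0; a <= 4; ++a ) {              // 'a' from 0 through 4
            for( int bi = 0; bi <= 5; ++bi ) {       // 'b' from 0 to 1 in increments of 0.2
                for( int ci = 0; ci <= 1; ++ci ) {   // 'c' false or true
                    combinations.push_back( Foo{ a, bi * 0.2f, ci == 1 } );
                }
            }
        }
        return combinations;
    }

    // If someone adds a member 'd' to Foo, this function needs yet another nested loop.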

    Being able to “look inside” an object and figure out its composition is referred to as reflection.  People who’ve been using C++ for years probably recognize that, since the language hasn’t historically had good support for reflection, it’s often been painful to implement code that deals with objects in a very general way.  More specifically, it hasn’t been easy to look at the contents of a given object, enumerate the properties, and generate general code based on knowledge of those properties.  People have been working around these problems for years (with things like meta-object compilers, interface description languages, protobufs for message encoding, etc.)  The truth is that reflection is just not that easy in C++.  (In fact, this thorough and insightful article by Jackie Kay details the problem quite nicely.  While I haven’t looked into specifics, it does seem as if there is hope for the future.)

    During my porting project, the not-so-easy-to-reflect element of C++ essentially “went away” when I moved my code to Javascript.  While I was using Qt, not all of the objects I was using were Qt objects.  In fact, they were generated by yet another tool that had no awareness of the meta-object system.  For my needs, re-coding functions whenever an object’s composition  changed was quickly turning into a serious maintenance headache.  It was one of those times where I was happier to be using Javascript than C++.  Performance not being necessary, Javascript was a much more natural choice.

    Easy Mode:  Enter Javascript

    Javascript’s objects are effectively dictionaries.  That’s it.  You have an object.  You can enumerate the keys of that object, and those keys essentially tell you the members of the object.  So, given the following:

    let lala = { a: 1, b: 2, c: 3 }

    You can figure out all the members simply by:

    Object.keys( lala ); // Gives [ 'a', 'b', 'c' ]

    Obviously, this is a lot easier than having some kind of meta-object compiler or framework and fishing around for information about an object.  What I needed was a generalized way to enumerate properties and generate combinations of an object’s member variables.

    Since my first use case was a form of genetic algorithm, I’m going to discuss my solution in biological terms:

    Given some object (chromosome) (e.g., { a: 0, b: 1, c: 2 }), I wanted to be able to generate all versions (mutations) of the object (chromosome) where I could specify a bunch of values for ‘a’, a bunch of values for ‘b’, and so on and so forth until all combinations were generated.  I also wanted the ability to leave some properties alone or untouched (unmutated.)  Finally, given that there could be relationships between properties and that some combinations (invalid mutations) were not permissible, I also wanted a cull function to loop through the result set and get rid of invalid combinations (chromosomes).

    So for example (a very contrived example), given a simple object:

    {a:0,b:0,c:12}

    Suppose I wanted ‘a’ from the range [0,2) and ‘b’ from [5,7); I would want results like this:

    {a:0,b:5,c:12}, {a:0,b:6,c:12}, {a:1,b:5,c:12}, {a:1,b:6,c:12}

    To accomplish this, I wrote ObjectMutator.js.  Using ObjectMutator.js, I simply had to do this:

    let chromosome = {
        a: 0,
        b: 0,
        c: 12,
    };
    
    let mutationGroup = [
        { gene: "a", start: 0, checkRange: (i) => (i < 2 ), step: (i) => i+1 },
        { gene: "b", start: 5, checkRange: (i) => ( i < 7 ), step: (i) => i+1 }
    ];
    
    let mutationBag = new Mutator( chromosome, mutationGroup );
    console.log( mutationBag.mutations );

    The output is then (from VS Code’s debug console):

    Array(4) [Object, Object, Object, Object]
    index.js:64
    length:4
    __proto__:Array(0) [, …]
    0:Object {a: 0, b: 5, c: 12}
    1:Object {a: 0, b: 6, c: 12}
    2:Object {a: 1, b: 5, c: 12}
    3:Object {a: 1, b: 6, c: 12}
    

     

    To keep it simple, I didn’t use the ability to filter unwanted mutations.  For example, you could remove all mutations where the member ‘a’ was greater than ‘b’ by passing in a culling function to be used in a final filter pass (Array’s filter()).

    The code for Mutator is below.  It takes three parameters:  a chromosome, a mutation group defining the range of mutations, and a cull function for filtering out invalid chromosomes resulting from the mutations (not used above).  I took advantage of how Javascript objects work and wrote some code to generate my mutations — below.  (Note that I am not deeply versed in the pros and cons of various Javascript approaches, so you’re welcome to provide optimizations and feedback in the comments.  I’d appreciate it!)

    
    class Mutator {
        /**
         * @constructs Mutator
         * @description Constructs mutator object
         * @param {chromosome} The base object that gets mutated
         * @param {mutationGroup} The set of (joint) mutations to apply to a chromosome
         * @param {cullFunction} The function passed to a filter to remove any invalid/unwanted genes in the mutation set
         */
        constructor( chromosome, mutationGroup, cullFunction ) {
            this.mutations = [];     
            this.chromosome = chromosome;
    
            this.assertGenesToMutatePresent( mutationGroup );
            function generateMutations( chromosome, mutationGroup ) {
                function internalGenerateMutations( mutationGroup ) {
                    let mutationGroupClone = mutationGroup.slice(0);
                    let frameGene = mutationGroupClone.shift();
                    
                    let nextGeneration = [];
                    for( let i = frameGene.start; frameGene.checkRange(i); i = frameGene.step(i) ) {
                        if( mutationGroupClone.length > 0 ) {
                            let subMutations = internalGenerateMutations( mutationGroupClone );
                            nextGeneration.push.apply( nextGeneration, subMutations.map( (submutation ) => {
                                submutation[ frameGene.gene ] = i;
                                return submutation;
                            }));
                        } else {
                            let mutation = {};
                            mutation[ frameGene.gene ] = i;
                            nextGeneration.push( mutation );
                        }
                    }
                    return nextGeneration;
                }
        
                let baseMutations = internalGenerateMutations( mutationGroup );
                return (baseMutations.map( (value) => {
                    return {
                        ...chromosome,
                        ...value
                    };
                }));
            };
    
            this.mutations = (typeof( cullFunction ) === 'function') ?
                generateMutations( chromosome, mutationGroup ).filter( cullFunction ) :
                generateMutations( chromosome, mutationGroup );
        }
    
        /**
         * @function assertGenesToMutatePresent
         * @memberOf Mutator
         * 
         * @param {mutationGroup} Mutation group that needs to be checked
         */
        assertGenesToMutatePresent( mutationGroup ) {
            mutationGroup.forEach(element => {
                if( !this.chromosome.hasOwnProperty( element.gene ) )
                    throw Error( `Missing gene ${ element.gene }` );
            });
        }
    };

    Hopefully, you find some utility in this blog post.  If you find any bugs or have pull requests, let me know.

  • My First Experience with Server-Side Swift (using Perfect from perfect.org) – Part 2

     

    In Part 1, I talked about why I chose Swift on the backend and my general overall experience using Perfect.  In Part 2, I want to talk about my experience using Perfect-Mustache and Perfect-Redis.  The gist of this post:  I hit some minor issues using Perfect-Mustache, but I was able to work around them with small syntactic changes.  With Perfect-Redis, however, I hit some suspicious data corruption and had to switch to Kitura’s Redis driver (from IBM) instead.   (I replicated the bugs and submitted bug reports.  If I manage to get some time, I will look into the implementation myself.)

    For the record, despite encountering bugs, I am successfully using Perfect for one particular backend service in production and have not encountered any issues.  Life on the bleeding edge of technical endeavors sometimes involves, well, … bleeding.  Sometimes the bleeding happens on the user end, sometimes the bleeding happens on the vendor side.  C’est la vie.

    Perfect Mustache

    Perfect-Mustache is supposed to perform most of the functions provided by Mustache.js.  (Mustache.js is sort of a nice logic-less templating system.)  For my application, I didn’t intend to do much server-side rendering of content; I tried to keep any content rendering strictly on the client side (using React.js.)  Still, I wanted to dabble in some server side rendering to see what I could get away with.  The current implementation of Perfect Mustache doesn’t faithfully replicate all the features of Mustache.  For example, I tried to use the {{.}} syntax to render elements from an array of strings and this was not supported.  By creating objects instead of strings and providing a key to reference a given string, things worked fine.

    My opinion:  If you end up using Perfect Mustache, the best way to go about it is to look at the samples provided by Perfect and try to accomplish what you’re after by re-using the examples provided by Perfect’s developers.  Usually the developers had good reasons for restricting functionality, or they encountered some hiccup dealing with data on one platform (Mac OS) vs another (Linux.)  In most cases, with Perfect, you can accomplish most of what you want by not venturing too far away from the example projects.

    Perfect Redis

    Perfect Redis did some small things correctly for me, but did not work properly for my use case.  When I attempted to submit JSON-encoded objects to Redis, I immediately encountered problems.  The strings were not being encoded properly for JSON.  I had to write a Swift extension to encode the values properly on the way in.  (See GitHub pull request:  https://github.com/PerfectlySoft/Perfect-Redis/pull/7 — the developers accepted this.)  Things seemed to work, at first.  When I increased the size of the content submitted to Redis, I started getting crashes and corrupted data.  I filed a bug report, but stopped pursuing the issue when I realized that digging into how the driver talks to Redis would take more time than I had, and moved on.

    I should add that there were some other gotchas I encountered while working with Perfect’s Redis drivers.  For example, calling something like the listPrepend() (i.e. LPUSH) function means that the callback can execute at any time in the future.  Subsequent code can execute before the results of a callback are known.  The documentation doesn’t hint that this node.js-style (asynchronous) approach is the intended way to program against the driver (although I suppose this becomes obvious to the programmer when things stop working properly.)  This situation could be a bit of a surprise to the unsuspecting.

    Fortunately, I was able to drop in IBM’s Kitura Redis driver and was immediately able to get a functional solution.  Being able to swap drivers like this, I think, speaks highly of the current Swift ecosystem (at least with regard to constructing tiny web services.)  If all of Perfect doesn’t work for you, mixing and matching components from other areas in the Swift ecosystem works as a short-term fix.

    You’re not always left stranded on an island of broken code in the Swift ecosystem.  🙂

    Do I still plan to keep using Perfect and Swift on the backend?

    In some cases, yes.  Where it made sense to leverage existing Swift code, I think Perfect managed to do the job nicely.  The framework’s code is easy to follow and it took very little time to get a minimalist API up and running.  (In fact, I have a production service in Swift running now with almost no downtime and have been incredibly happy with the results.)  In terms of runtime performance, Perfect has been more than satisfactory.

    In terms of a general “which framework is best” sort of question, I think the jury is still out.  I expect the Swift (and Perfect) ecosystem to advance leaps and bounds over time. However, a lot has to be said for the sheer amount of documentation and examples prevalent in other frameworks that have already become more mainstream.  As an example, as much as I prefer Swift to Javascript, the node.js ecosystem has evolved into a rather pleasant place to be with the sheer number of high quality projects out there.  The Swift ecosystem doesn’t currently offer quite as much, at least outside of Apple-centric product development.  There are always trade-offs, and server-side Swift doesn’t feel hands-down, no-doubts, definitively compelling — although much promise exists for the future.

    On a final note, I’d definitely like to see Swift play a stronger role on the backend.  I would encourage developers to try their hands at tinkering with this ecosystem and contributing when possible.

     

  • React.js: What’s this.props.children in JSX all about?

    If you’re expecting the second installment of the post on server-side Swift, don’t worry.  I’m still documenting some things!

    Lately I’ve been toying with the idea of converting an old UI written in Qt to make it work with React, and React is new to me.  In React, the user interface is defined in terms of JSX, a markup syntax that eventually gets transformed into usable JavaScript.  There’s just one minor issue:  Transpiling often obscures what goes on under the hood.  Normally, I learn things fastest by reading other people’s code; however, with React, I find it more beneficial to read other people’s code AND read what the JSX is transpiled into.

    In my case, I was a little bit confused by what was happening under the hood with regards to JSX and “this.props.children”, at least from skimming documentation and trying to crank out something quickly (at the expense of being thorough.)

    More specifically, I was curious what got generated under the hood when a situation like the following was encountered:

    <SomeReactComponent>
        <SomeChildReactComponent/>
        <SomeChildReactComponent/>
    </SomeReactComponent>

    Normally, after transpiling (via Babel), each React component gets created via React.createElement().  If there were multiple child nodes (like in SomeReactComponent above, where there are two “SomeChildReactComponent” elements), how did React actually handle the element creation?  The documentation and several examples say to use {this.props.children}.  But what is actually happening here?

    I figured I’d construct an example and transform it via Babel to see what it looked like. I have an EncloserApp top level component. Inside the top level component, I created an Encloser. Encloser then “encloses” EnclosedElement. The objective was to just see what the transpiled output looks like:

    class Encloser extends React.Component {
        render() {
            const borderedStyle = {border: "1px solid red", padding: 6};
            return (
                <div style={borderedStyle}>
                {this.props.children}
                </div>
            )
        }
    }
    
    class EnclosedElement extends React.Component {
        static getPropTypes() {
            return {
                "custom_string_property" : React.PropTypes.string.isRequired
            }
        }
        render() {
            return (
                <h1>{this.props.custom_string_property}</h1>
            )
        }
    }
    
    class EncloserApp extends React.Component {
        render() {
            return (
                <Encloser>
                    <EnclosedElement custom_string_property="String One"/>
                    <EnclosedElement custom_string_property="String Two"/>
                </Encloser>
            )
        }
    }
    
    const contentNode = document.getElementById( 'contents' );
    ReactDOM.render( 
        (
            <div>
                <EncloserApp />
            </div>
        )
        , contentNode
    );

    In the transpiled code, as expected, the top level element is created as such:

    var contentNode = document.getElementById('contents');
    ReactDOM.render(React.createElement(
        "div",
        null,
        React.createElement(EncloserApp, null)
    ), contentNode);

    The createElement() call creates the div, as expected, passes along no properties (null), and then creates a singular child node for the EncloserApp.  Now, inspecting EncloserApp, the code (around the render() method) looks like this:

     

    _createClass(EncloserApp, [{
            key: "render",
            value: function render() {
                return React.createElement(
                    Encloser,
                    null,
                    React.createElement(EnclosedElement, { custom_string_property: "String One" }),
                    React.createElement(EnclosedElement, { custom_string_property: "String Two" })
                );
            }
        }]);

    So, essentially, {this.props.children} is transformed into a list of child nodes that are passed as parameters to createElement (which accepts a variable number of arguments).  “this.props.children” exists in JSX specifically for JSX markup written as an opening and closing tag (at least according to the documentation.)

    I know this is not an earth shattering blog post, but it helped me to see the nature of the transformation underneath.  The system makes more sense seeing the transformation of JSX into the graph containing React component nodes.

    Ah, clarity.

  • My First Experience with Server-Side Swift (using Perfect from perfect.org) – Part 1

     

    Motivations

    Most of the work I take on involves languages like C++, Swift, or Objective-C and is not typically web based.  While I’m normally nestled comfortably in the embrace of C++ template meta-programming or the warm walls of an operating system kernel, even I can sense the uncivilized barbarians at the gate (Javascript) and the increasing demand that software work more closely with web based APIs and various web stacks (Django, Rails, node.js.)

    Recently, I had an interesting opportunity to explore the use of server-side Swift.  The client was attempting to ferry computational requests from a number of web and iOS-based clients to a very large computational backend written in C++ (complete with its own ORM) that was distributed across a number of nodes.  Since one of the core libraries used happened to be written in Swift, there was some hope we could simply reuse code on the server side without much effort.  Rather than compile a separate application and marshal requests to the Swift library through yet another framework, the decision was made to just service requests outright from a web service authored in Swift.

    Instead of authoring web services from scratch, I looked for a ready-made solution to simply embrace and extend.  There were a few different frameworks to choose from, but I eventually settled on Perfect (from http://www.perfect.org).  The decision to use a server-side swift framework, given other mature technologies, was not without internal friction.  Criticism was leveled that I was using untested “hipster” technology.  In fact, some of my colleagues insisted I go so far as to don a beret, head to the local hipster coffee shop, and offer poorly authored poetry to the public about being oppressed by self-authored technical debt!  Despite critics’ concerns, I want to highlight that I had some success with Perfect!  (Also, not enough technical debt was accumulated to warrant lobbing harangues-inappropriately-called-poetry at the public.)  Of course, the effort was not completely glitch free — and that is part of what I will document here.

    Why Perfect?

    Why Perfect?  My choice was based on a mix of inputs:  GitHub commits, looking into public issue trackers, various blogs, popularity, and out-of-the-box features.  I watched a video on YouTube from the CEO of the organization responsible for Perfect and thought his developers’ take on things aligned with my own views.  Moreover, in my particular environment, I needed connectors for Redis and Postgres.  The support for web sockets and the basic documentation gave me enough confidence in the product to give it a shot.  If Perfect failed me, I figured, it would fail me quickly and in an obvious enough fashion that I could bail without disastrous results.

    For our internal use case, some “jankiness” in the tech stack was tolerable provided we got results often enough between failures.  I was willing to fix bugs, provided they weren’t too deep rooted.  Basically, Perfect really needed to be “good enough” — and for our use case, it was.

    The Use Case

    Our use case:  Accept work requests from web, desktop, or mobile clients, perform some transformations and filtering on those requests, service those requests, or (in the case of computational expensive requests) dispatch those requests to a work queue (Redis).  Binaries written in C++ would then consume elements from Redis and place their results back in a database (Postgres) or, in some cases, Redis.  As the computational backend progressed on its work requests, updates were provided via WebSockets (also via Perfect) to web based clients.

    Well, did you succeed?

    I did!  In fact, the general architecture of Perfect facilitated our needs quite well.  I would say the initial prototyping and deployment of a basic service went very smoothly.  The transition from Swift 3 to Swift 4 even went smoothly — the developers responsible for Perfect handled this transition in a very timely fashion. There was no extended period in which I had clients on one version of Swift with the server on another version of Swift.

    Routes, URL Variables, Responses

    In Perfect, it is very easy to create a server, add routes, and service requests.  No particularly advanced knowledge of Swift is really required to accomplish this.  Given my experience with other frameworks, I did not encounter too much difficulty just getting up and running.  The basic process is to set up a route, pass a function parameter to handle the route, and reply using response objects.  Most of the basic web primitives for handling URL variables, posted parameters, and managing responses are provided.

    Implementing an API is simple.  Return the right response type, format your response with the proper encoding, and write the response.  I hit no major issues or bugs in performing these basic tasks.

    Encouraging!

    Using Swift on the Server Side in Linux

    Swift on the server-side does have some gotchas — this isn’t Perfect’s fault, but it affected me in that, if I developed code in Xcode and then recompiled and deployed on Linux, some small snippets of code would not compile.

    In addition, the Foundation libraries, while functionally complete, have some corner cases where things can get awkward.  One (cosmetic) thing I found somewhat unusual was the use of NSRegularExpression.  OS X and iOS developers will normally see “NS” namespace objects, but this felt slightly awkward on Linux.  Moreover, NSRegularExpression (and affiliated functions) is less pleasant to use than regex facilities in other languages.  (Obviously, this is an opinion; but I suspect many readers who’ve used other frameworks would agree with me.)

    During the transition between Swift 3 and 4, I hit some snags with certain function signatures changing.  In some cases, my code was littered with conditional compilation blocks like #if os(OSX) to tweak behavior on different platforms.  I didn’t love this, but it wasn’t too difficult to work around and was only a small drawback.

    Another issue I encountered was doing semi-low-level tasks — I ended up needing third party libraries to do things like atomically incrementing integers.  I found this somewhat distasteful, but not unreasonable given the nature of how Swift has been evolving and changing over the past few years.  Some primitives available on OS X were not available on Linux, and suggestions on Stack Overflow seemed clunky.  I ended up using a third-party library that made effective use of features provided by Clang.  Performing some systems-software tasks in Swift still feels awkward compared to using C or C++.  This could be my own experience, but it certainly felt awkward, and I’d caution others who have these same scenarios.

    Interfacing to Postgres

    I did not use the ORM (stORM) provided with Perfect.  I did, however, issue SQL requests via their Postgres package.  I think, in many languages and frameworks, authoring SQL requests is a bit of a pain.  It wasn’t any different in Swift while using Perfect.  I’m not sure there’s a way around painfully constructing strings representing complex queries, but there’s no immediately perceptible or dramatically noticeable edge in Swift for performing this task.

    One issue that did cause me to raise an eyebrow was a bug filed in the Perfect issue tracker regarding memory leaks on tables returned from the Perfect driver built on top of libpq.  My use case did not stress the Postgres driver enough for me to experience anything catastrophic with regard to using Postgres.  At the time of writing for this blog post, this issue has not been closed.  This could be of concern for long running services issuing a very large number of requests.  Hopefully the developers will address this.

    What’s Coming in Part 2

    In part 2, I’ll detail a few issues I hit in Perfect-Mustache, issues I encountered in the use of Perfect-Redis, and other architectural gotchas that I encountered.  The issues I touch upon in Part 2 have more substance.  Stay tuned!

     

  • Beat Detection: A Raspberry Pi Hacking of Hallmark’s “Happy Tappers”

    In graduate school, time series analysis was the topic I liked the most. The classic models (like AR(p), MA(q), ARMA(p,q), etc.) are usually applied after time series have been sampled in some fashion. Of course, there’s more than one way to look at a time series, and many of those perspectives come from the field of digital signal processing. Unfortunately, a lot of the textbooks on DSP are dry. One can spend hours reading a DSP book, understand the math, and still not really appreciate the material. I happen to think the appreciation for the topic comes more from trying to solve real-world problems. As problems are encountered, the motivation arises to go back and attack the mathematics in a meaningful sense. One of the more interesting problems, in my opinion, is real-time beat detection in music.  Therefore, I decided to do some experimentation with beat detection.

    After Christmas, I went through pharmacy after-Christmas sales and picked up a bunch of cheap Hallmark ornaments. When I discovered the ornaments had a port to interface to each other, I decided I’d look at the interfacing method and devise my own schemes for controlling them. After figuring out the communications protocol between ornaments, I experimented by building control systems using FPGAs, Arduinos, and the Raspberry Pi.  I ultimately settled on the Raspberry Pi.

    With the Raspberry Pi, I was able to perform FFTs fast enough on the music samples to make a crude attempt at detecting beats while playing mp3 files. The project is still something I tinker with from time to time. I find entertainment in looking at various approaches for beat detection in research papers and documents on the internet.  Below is a sample of what I have been able to create so far, which is a hack of Hallmark’s Happy Tappers, set to a clip of Capital Cities’ “Safe and Sound”:

    As precious free time becomes available, I’d like to improve the system to work with onset detection and explore various filtering problems.

    Finally, I’d like to thank the Dishoom Labs crew (MB) for letting me borrow some equipment during the project.

    (Edit:  Removed Flickr Video and replaced with YouTube.)