For those tasked with network threat detection and response, network flow records are widely held to be a valuable data source. Yet many organizations lack the capability to use a complex data source like network flows effectively, as it can generate billions of records in a day. Deriving insight from network flows requires not only the ability to store and process enormous quantities of data, but also the transformation of that data into forms better suited to analysis. Without compelling visualization of the results, the insight may remain hidden. If the data does not enable the practitioner to “tell the story,” the insight may never have the impact needed to effect change. To increase the insight that network flows can provide, our research team took a path borrowed from scientific investigations in climate science and neuroscience. A solution for network flows must be a solution at scale, demanding careful attention both to minimizing the data footprint and to the speedup available from parallel processing. This approach required transforming the data into time series and graph structures, then applying both classical scientific data analysis, such as principal component analysis and Fourier analysis, and more recent methods involving probabilistic graphical models. These techniques provide new ways to visualize network flow data and increase the potential for insight. The steps in our approach are intended to help the practitioner “tell the story” in network flow data. In this talk and the supporting paper, Grant shares key learnings and outlines our approach, which is largely based on open source software.
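
The kind of transformation described above — turning raw flow records into a time series and then applying Fourier analysis — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the record layout and function names are hypothetical, and a production pipeline would operate at far larger scale.

```python
# Minimal sketch (hypothetical data layout): bin flow records into a
# fixed-width time series, then apply a Fourier transform to surface
# periodic behavior such as beaconing.
import numpy as np

# Hypothetical flow records: (epoch_seconds, bytes_transferred)
flows = [(0, 500), (60, 520), (120, 480), (180, 510),
         (240, 495), (300, 530), (360, 505), (420, 515)]

def flows_to_series(records, bin_seconds=60):
    """Aggregate flow byte counts into fixed-width time bins."""
    times = np.array([t for t, _ in records])
    counts = np.array([b for _, b in records], dtype=float)
    bins = ((times - times.min()) // bin_seconds).astype(int)
    series = np.zeros(bins.max() + 1)
    np.add.at(series, bins, counts)  # sum bytes falling in each bin
    return series

series = flows_to_series(flows)
# Magnitude spectrum of the mean-removed series; peaks indicate
# dominant periodicities in the traffic volume.
spectrum = np.abs(np.fft.rfft(series - series.mean()))
print(spectrum.round(2))
```

A graph-structured view (e.g., hosts as nodes, flows as edges) would complement this time-series view, but is omitted here for brevity.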