Monday, July 30, 2018

2018-07-30 Monday - Open Offices - Bad for High Performance Employees

  • "Oxford Economics conducted a survey of more than 600 executives and 600 employees to better understand what works for employees—and what doesn’t—about open-plan layouts."
  • "... results show that threats to productivity and worker peace of mind are bigger issues than most executives realize..."

5-part article series... 
  • See Part-2 for this quote: "58% of HPEs need more private spaces for problem solving, and 62% of HPEs find their office environment 'too distracting.'" (4,000+ respondents)
  • "Ethan Bernstein and Stephen Turban, at Harvard Business School and Harvard University, took a look at people who switched from individual cubicles to an open office plan. What they found wasn't more collaboration after the switch but less. Participants in the study spent 73 percent less time in face-to-face interactions, 67 percent more time on email, and 75 percent more time on instant messenger."

  • "Some workplace designs are more about cost-cutting than collaboration"

Saturday, July 28, 2018

2018-07-28 Saturday - A Gentle Rebuke to Martin Fowler, re: Software Architect

After watching this brief talk given in 2015 by Martin Fowler at the O'Reilly OSCON conference, I have a gentle rebuke:

First, let me say that I greatly admire Martin Fowler - and have told him so, in person, during a QCon in San Francisco some years ago - and that he is a great inspiration to my own personal growth in this field.

But I do have a few questions - that I think are worth considering as a counter-point to the views expressed in this talk - which I do not think adequately consider the matter:

The implicit view that architects who do not code (as their primary activity, or at all) are of no value - and that the code is the essential truth of 'The Architecture' - seems to me to ignore the following concerns:

1. While AS-IS may very well be expressed in code (to a great extent) - what of the TO-BE, which may well need to be charted and communicated over a multi-year arc of intent, effort, and funding by the business - in coordination with dozens, hundreds, or thousands of third-party partners?

2. What of communicating an understanding of those aspects of the design, which may need to be shared and communicated with third-parties - such as in the case of major (and complex) integration efforts? Will you just give those (potential competitors) the code?

3. What of communicating an understanding of those aspects of the overall system - some of which may not be implemented in your code - and which may rely upon other systems for which you do not have access to the source code (or which may be hosted by Cloud SaaS providers)?

4. What of communicating an understanding of those aspects of the overall system - some of which may not be implemented in code at all, but rather as manual processes - yet are still critical elements to understanding the true scope of the overall system?

As Randy Shoup notes in his talk "Moving Fast at Scale" - "Sometimes we solve those problems with code" [emphasis mine] - which raises the question, Martin: How do you document those things that are not in code, if not with diagrams?

5. What of the oversight and governance functions that must be tended to as systems are managed through their life-cycle? These include a clear understanding of the architectural implications when sun-setting, replacement, or rationalization choices must be considered, researched, evaluated, and turned into recommendations. Are architects who spend their time coding - and not keeping a weather eye on these concerns - really making the best use of their time for the business they are serving?

6. Whether an architect codes (as their primary activity, or at all) - and whether there is value in an architect who no longer codes - seems to me to be primarily a question of scale. In a multi-billion dollar business, an architect may be responsible for the oversight of dozens of applications within one or more business lines (of which there may well be dozens), and is typically required to closely monitor, align, and coordinate architectural road maps that span multiple business lines - usually involving multiple organizational units and external third-party partners, and potentially impacting hundreds of applications - over multi-year staged delivery of capabilities. Is it really feasible (or optimal for the business) to have someone in such an architect role buried in the minutiae of coding?

7. Does the architect of a major commercial building, a skyscraper, a new aircraft, a bridge, or a new automobile design spend their time pouring foundations, laying bricks, running wiring through conduit, or riveting sheet metal on the assembly line? Would any professional architect worth their salt sneer at the very thought of being expected to document the design of such endeavors? Or would they simply say: pop the hood and disassemble the engine if you want to understand how it works? What of systems that approach (or exceed) 1M+ lines of code, and which may exist in an ecosystem of many other systems of similar size and complexity? Is it feasible to require people to just read the code to understand one of those systems - or the potentially intricate interactions between them? Does the advice of 'just a few diagrams' still hold?

For these reasons, we should be open to embracing the full continuum of possible interpretations of the role of Software Architect (or even Enterprise Architect) - and not be disparaging - and certainly not promote the use of such derisive terms as 'Architecture Astronaut'.

Meeks' Software Architecture Conjecture #1:
The source code may (or may not) be a full implementation of the capability needed by the business - but is more likely just an approximation (constrained by the permitted time, allocated budget, and available skills/talent of the team involved). Therefore, it should not be confused with the actual or desired (or envisioned) design - which may require multiple years to achieve - and of which the current source code may reflect only a partial (and possibly inaccurate) representation.

Thursday, July 26, 2018

2018-07-26 Thursday - Resources for ADA Compliance Initiatives

I happened across this government 'Americans with Disabilities Act' (ADA) web site this evening - if you have an ADA initiative planned, this might be of interest to you:

 ADA Best Practices Tool Kit for State and Local Governments

I also keep a list of additional resource links: 

Monday, July 16, 2018

2018-07-15 Sunday - Orchestration vs Choreography

I've often been lax in my use of the terms Orchestration and Choreography.

This evening I found a useful explanation of the distinction between the two, in this Wikipedia article on BPEL:
"BPEL is an orchestration language, and not a choreography language. The primary difference between orchestration and choreography is executability and control. An orchestration specifies an executable process that involves message exchanges with other systems, such that the message exchange sequences are controlled by the orchestration designer. A choreography specifies a protocol for peer-to-peer interactions, defining, e.g., the legal sequences of messages exchanged with the purpose of guaranteeing interoperability. Such a protocol is not directly executable, as it allows many different realizations (processes that comply with it). A choreography can be realized by writing an orchestration (e.g., in the form of a BPEL process) for each peer involved in it. The orchestration and the choreography distinctions are based on analogies: orchestration refers to the central control (by the conductor) of the behavior of a distributed system (the orchestra consisting of many players), while choreography refers to a distributed system (the dancing team) which operates according to rules (the choreography) but without centralized control."
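The distinction can be sketched in a few lines of Python (a hypothetical illustration; the trip-booking services and the tiny event bus below are invented for this example):

```python
# Hypothetical sketch contrasting orchestration and choreography.
# Service names and the event bus are invented for illustration.

def reserve_flight(order):
    return {**order, "flight": "booked"}

def reserve_hotel(order):
    return {**order, "hotel": "booked"}

# Orchestration: one central process controls the message-exchange sequence,
# like a conductor directing the players.
def book_trip_orchestrated(order):
    order = reserve_flight(order)  # the orchestrator decides the order of calls
    order = reserve_hotel(order)
    return order

# Choreography: no central controller; each peer reacts to events it observes,
# and the overall flow emerges from the agreed protocol.
class EventBus:
    def __init__(self):
        self.handlers = {}
    def on(self, event, handler):
        self.handlers.setdefault(event, []).append(handler)
    def emit(self, event, payload):
        for handler in self.handlers.get(event, []):
            handler(payload)

bus = EventBus()
trip = {}
bus.on("order_placed", lambda o: (o.update(flight="booked"),
                                  bus.emit("flight_booked", o)))
bus.on("flight_booked", lambda o: o.update(hotel="booked"))
bus.emit("order_placed", trip)  # trip ends up with both bookings
```

In the first style a single orchestrator owns the control flow; in the second, each peer only knows which events it reacts to, and the end-to-end sequence emerges from the protocol rather than from any one component.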

Saturday, July 07, 2018

2018-07-07 Saturday - CPU vs GPU for Machine Learning Performance
"Intel has been reported to claim that processing in BigDL is 'orders of magnitude faster than out-of-box open source Caffe, Torch, or TensorFlow on a single-node Xeon processor (i.e., comparable with mainstream GPU).'"

TensorFlow* Optimizations on Modern Intel® Architecture
"TensorFlow benchmarks, with CPU optimizations added, see CPU performance gain as much as 72X"

A paper presented during the 2017 International Conference on Machine Learning (ICML):

  • Deep Tensor Convolution on Multicores
    • "...Another important reason to look at CPUs is when batch size is 1, as may be the case in Reinforcement Learning, where it is not worthwhile to move data between CPU and GPU." 
    • "Deep convolutional neural networks (ConvNets) of 3-dimensional kernels allow joint modeling of spatiotemporal features. These networks have improved performance of video and volumetric image analysis, but have been limited in size due to the low memory ceiling of GPU hardware. Existing CPU implementations overcome this constraint but are impractically slow. Here we extend and optimize the faster Winograd-class of convolutional algorithms to the N-dimensional case and specifically for CPU hardware. First, we remove the need to manually hand-craft algorithms by exploiting the relaxed constraints and cheap sparse access of CPU memory. Second, we maximize CPU utilization and multicore scalability by transforming data matrices to be cache-aware, integer multiples of AVX vector widths. Treating 2-dimensional ConvNets as a special (and the least beneficial) case of our approach, we demonstrate a 5 to 25-fold improvement in throughput compared to previous state-of-the-art." 
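The paper's point about shaping data matrices to be "integer multiples of AVX vector widths" boils down to simple round-up padding, so inner loops can use full vector lanes without remainder handling. A minimal sketch (the width of 8 assumes 256-bit AVX registers holding float32 lanes; the helper name is invented):

```python
# Illustrative sketch of the padding idea: round a matrix dimension up to
# the next integer multiple of the SIMD vector width.
# Assumes 256-bit AVX registers holding 8 float32 lanes.

VECTOR_WIDTH = 8

def pad_to_vector_width(n, width=VECTOR_WIDTH):
    """Round n up to the next multiple of width (no-op if already aligned)."""
    return ((n + width - 1) // width) * width

# e.g., a 13-wide tile would be padded out to 16 columns so the vectorized
# inner loop never has a partial (scalar) tail to clean up.
```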

2018-07-07 Saturday - Researching BigDL Machine Learning Framework

Preparing my reading list to do a deep dive in BigDL:




2018-07-07 Saturday - fastText for Text Classification

I'm doing some focused reading this weekend to investigate the relative performance of Machine Learning frameworks leveraging GPU vs CPU implementations - and whether there are cases in which a distributed CPU approach may have an advantage over a GPU approach. 

This 2016 paper (using fastText, for text classification problems) by a Facebook AI Research (FAIR) team (Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov) achieved some startling results that may be of interest to others.

Bag of Tricks for Efficient Text Classification
"This paper explores a simple and efficient baseline for text classification. Our experiments show that our fast text classifier fastText is often on par with deep learning classifiers in terms of accuracy, and many orders of magnitude faster for training and evaluation. We can train fastText on more than one billion words in less than ten minutes using a standard multicore CPU, and classify half a million sentences among 312K classes in less than a minute."
"FastText is an open-source, free, lightweight library that allows users to learn text representations and text classifiers. It works on standard, generic hardware. Models can later be reduced in size to even fit on mobile devices."
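The core recipe behind fastText-style classifiers is deliberately simple: treat a document as a bag of words, learn a linear score per label, and pick the argmax. A toy sketch of that idea in pure Python (this is not the fastText library; the training data and function names here are invented for illustration):

```python
# Toy bag-of-words linear classifier, illustrating the general idea behind
# fast linear text classification. NOT the fastText library or its algorithm
# (fastText learns embeddings via SGD); this just scores labels by word counts.

from collections import Counter

def train(samples):
    """samples: list of (text, label). Learn per-label word counts."""
    counts = {}
    for text, label in samples:
        bag = counts.setdefault(label, Counter())
        bag.update(text.lower().split())
    return counts

def predict(counts, text):
    """Score each label by summed word counts (a crude linear model)."""
    words = text.lower().split()
    return max(counts, key=lambda label: sum(counts[label][w] for w in words))

model = train([
    ("great fast accurate results", "positive"),
    ("slow buggy poor results", "negative"),
])
```

The actual fastText model replaces the raw counts with learned word (and character n-gram) embeddings averaged into a document vector, followed by a linear softmax layer - but the "bag of tricks" spirit of keeping the model linear and cheap is the same.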

Implementing Deep Learning Methods and Feature Engineering for Text Data: FastText

2018-07-07 Saturday - Exploring Scala.js

Writing client-side web applications in Scala

Monday, July 02, 2018

2018-07-01 Sunday - Weekend Research and Experimentation

Spent quite a bit of time this weekend experimenting with the latest release of Docker (18.05.0-ce).

Upgraded to the latest version of Vagrant (2.1.2) - will spend some time in the coming week diving back into it.

I've also spent some time reviewing a textbook that Charles Betz helped author, which he uses to teach a Master's-level course in IT Delivery:

An interesting quote, via Twitter, [re: Why Quantum Computing matters]... 
"Modeling the caffeine molecule requires around 10^48 bits, which is around 1-10% of the # of atoms in the earth, but only 160 qubits" - @marco_pistoia (Master Inventor at IBM's T. J. Watson Research Center) at #FOSD #QuantumComputing

Another programming language to add to my backlog of innovative ideas to explore: The Dark Programming Language:

These concepts are worth adding to your search filters for news about emerging trends in software development innovation:
  •  "neural program synthesis"
  • "machine learning + code" 
As part of a deep dive I'm doing to prepare for some of the AWS Certification exams, I spent hours Saturday evening reviewing and updating my Kindle with 90+ free books that Amazon AWS makes available as part of the documentation for their cloud computing services and products.


© 2001-2021 International Technology Ventures, Inc., All Rights Reserved.