2012-12-30

2012-12-30 Sunday - NFRs

Non-Functional Requirements (NFRs) are often poorly documented, and the terms used to describe them frequently mean different things to different team members.

Teams often take a boil-the-ocean approach to NFRs, in which all NFRs are treated equally (at best) or ignored en masse by the developers (at worst) due to the sheer volume of NFRs specified.

Today I happened across a paper, published as a technical report in March 2012, that was presented at the 20th IEEE International Requirements Engineering Conference, September 24th-28th, 2012, Chicago, Illinois, USA.

http://crisys.cs.umn.edu/re2012/

NON-FUNCTIONAL REQUIREMENTS IN SOFTWARE ARCHITECTURE PRACTICE
Report ESSI-TR-12-1
Departament d’Enginyeria de Serveis i Sistemes d’Informació
 http://upcommons.upc.edu/e-prints/bitstream/2117/15716/1/da-ca-jc-xf-report%20essi.pdf
by
David Ameller
Claudia Ayala
Xavier Franch
Software Engineering for Information Systems Group
Universitat Politècnica de Catalunya (GESSI-UPC)
Barcelona, Spain

and
Jordi Cabot
AtlanMod
INRIA - École des Mines de Nantes
Nantes, France

A slide presentation is also available:
http://modeling-languages.com/how-do-software-architects-deal-with-non-functional-requirements/


A few interesting quotes from the paper:

  • "Our interviews show that in 10 out of the 13 projects considered, the software architect was the main source of the NFRs."
  • "Architects did not share a common vocabulary for types of NFRs and showed some misunderstandings."
  • "The two most important types of technical NFRs for architects were performance and usability. On the other hand, architects considered non-technical NFRs to be as relevant as technical NFRs."
  • "Inability to interpret some particular term, e.g., “availability”, “accuracy”, and “sustainability”, requiring additional explanations from the interviewers"
  • "Use of a term with an incorrect definition. We found a serious confusion in the answer, e.g., “Maintainability is very important, because when something is working, we can’t make changes”

Other Resources Mentioned in the Paper
Volere Requirements Specification Template
http://www.volere.co.uk/template.htm
ISO/IEC 9126 Software engineering - Product quality
http://en.wikipedia.org/wiki/ISO/IEC_9126


Also see:

ISO/IEC 9126 in practice: what do we need to know?
http://www.essi.upc.edu/~webgessi/publicacions/SMEF%2704-ISO-QualityModels.pdf

ISO/IEC 25010:2011
http://www.iso.org/iso/home/store/catalogue_ics/catalogue_detail_ics.htm?csnumber=35733

Other Web Resource Links
http://en.wikipedia.org/wiki/Non-functional_requirement





2012-12-12

2012-12-12 Wednesday - Windows MAX_PATH?


I love to create organization from chaos.

In my consulting practice, I have a very organized approach to creating a directory structure for each client engagement, and within an engagement - a very organized sub-directory layout so that I can quickly find things long after I've filed them away.

Recently, while trying to help re-organize the logical directory structure of a client's Sharepoint site for a project - I hit a puzzling error message:

[screenshot of the SharePoint error message]


Without intending to be too verbose, my relatively few nested directory names exceeded a SharePoint limitation. I had simply provided useful and meaningful directory names - and organized them in such a way as to facilitate quickly finding information.  However, an inherent implementation constraint in the design of SharePoint imposes an arbitrary maximum length on a URL+filename (260 characters) and a further maximum length on a file name (128 characters).

While one can understand a limit for a file name or a directory path in the physical file system - it just seems strange to have an arbitrary restriction on how deeply one can nest a logical 'directory' structure within a content management system.

In doing a quick Google search, I came across the following article that may be of interest to others, as it mentions a MAX_PATH with a default of [260]:
http://msdn.microsoft.com/en-us/library/aa365247.aspx

Maximum Path Length Limitation

In the Windows API (with some exceptions discussed in the following paragraphs), the maximum length for a path is MAX_PATH, which is defined as 260 characters. A local path is structured in the following order: drive letter, colon, backslash, name components separated by backslashes, and a terminating null character. For example, the maximum path on drive D is "D:\some 256-character path string<NUL>" where "<NUL>" represents the invisible terminating null character for the current system codepage. (The characters < > are used here for visual clarity and cannot be part of a valid path string.)
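
As a practical aside, before bulk-loading a deeply nested folder tree into a library with these limits, it can help to scan for offenders first. Here is a minimal Java 8 sketch (the class name and the use of local path length as a proxy for the eventual URL are my own assumptions; the 260/128 limits are the values discussed above, and the real URL will also include the site/library prefix):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    public class PathLengthAudit {

        // Limits discussed above; adjust to match the target environment.
        private static final int MAX_URL_LENGTH = 260;
        private static final int MAX_FILE_NAME_LENGTH = 128;

        public static void main(String[] args) throws IOException {
            Path root = Paths.get(args.length > 0 ? args[0] : ".");
            try (Stream<Path> paths = Files.walk(root)) {
                paths.forEach(p -> {
                    String full = p.toAbsolutePath().toString();
                    String name = (p.getFileName() == null) ? "" : p.getFileName().toString();
                    if (full.length() > MAX_URL_LENGTH || name.length() > MAX_FILE_NAME_LENGTH) {
                        System.out.printf("%4d chars  %s%n", full.length(), full);
                    }
                });
            }
        }
    }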

Seriously Microsoft? You designed an API that exposes a physical constraint of your file system design - and applied it to the web?

I suspect that if Sharepoint were designed with something like the JSR 170 approach (Content Repository API for Java) - then this type of limitation would not be an issue.

Apache Jackrabbit is an implementation of JSR 170


2012-12-16 Sunday Update:

Apparently there is another manifestation of this constraint in the Microsoft ASP.NET implementation:

HttpContext.Current.Server.MapPath fails for long file names

Back in 2006, some empirical findings were published on Boutell.com that are worth noting: http://www.boutell.com/newfaq/misc/urllength.html

...as you will note, 2,000 would appear to be a minimally supported length across the various browsers and servers that were tested.

Per Microsoft Support, IE supports a max. URL length of 2,083
http://support.microsoft.com/kb/208427 
 

 ...so I guess it is just the design and implementation of Sharepoint.

  

2013-03-24 Sunday Update

You will also run into this issue if you are still using xcopy

http://www.terminally-incoherent.com/blog/2007/02/05/xcopy-insufficient-memory/  

 


2012-12-08

2012-12-08 Saturday - ISO 3166 Country Codes Utility Class


Last night I was looking for a small coding task that could be completed in an evening - that would allow me to add something of value to a personal utility library - and decided to work on a robust class to handle a variety of tasks related to ISO 3166 Country Codes.

[another one of my side projects is the implementation of a utility library for the United States Postal Service Pub 28 (Postal Addressing Standards) - in which this Country Code utility class will see some re-use]

Two useful wikipedia.org page references:

The utility class I'm working on provides a number of functional capabilities:
  • Stores the following Country Code Information:
    • Short English Name
    • Two Character Alpha Code (Alpha-2)
    • Three Character Alpha Code (Alpha-3)
    • Three Character Numeric Code (Numeric-3)
    • Top Level Domain (TLD)
  • Stores the Country Code data in a multidimensional array for most of the primary functional processing 
  • Stores a Key-Value entry in a HashMap to allow searching by the Short Name, as well as by the following codes: Alpha-2, Alpha-3, Numeric-3, and TLD
  • Utility method to generate an XML structure of the primary array information
  • Utility method to generate HTML select snippets for either Alpha-2, Alpha-3, or Numeric-3 codes.  A parameter allows the developer to specify which Country Code to set as the 'selected' value.  The method determines which type of code was passed by inspecting the characteristics of the single parameter.
  • Utility method to generate the XSD Enumerations for the Alpha-2, Alpha-3, and Numeric-3 Country Codes.
  • Utility method to dump the contents of a Country Code array
  • Utility method to dump the contents of a HashMap - which is searchable by all possible Key-Value combinations
  • Utility method to query the length of the array
  • Utility method to query the size of the HashMap

I've posted files for the generated HTML select samples on github:gist
I've posted a github:gist file for the XML snippet of the enumerations:
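
As an aside, here is a rough Java sketch of the array-plus-HashMap lookup idea described above (the class and method names are hypothetical, and only two sample rows are shown):

    import java.util.HashMap;
    import java.util.Map;

    public class CountryCodes {

        // Columns: Short English Name, Alpha-2, Alpha-3, Numeric-3, TLD
        private static final String[][] CODES = {
            { "Canada",        "CA", "CAN", "124", ".ca" },
            { "United States", "US", "USA", "840", ".us" },
            // ... remaining ISO 3166-1 entries ...
        };

        // Every name/code variant maps to the row that owns it, so one HashMap
        // lookup works for the Short Name, Alpha-2, Alpha-3, Numeric-3, or TLD.
        private static final Map<String, String[]> INDEX = new HashMap<>();
        static {
            for (String[] row : CODES) {
                for (String key : row) {
                    INDEX.put(key.toUpperCase(), row);
                }
            }
        }

        public static String[] lookup(String key) {
            return INDEX.get(key.toUpperCase());
        }

        public static void main(String[] args) {
            String[] row = lookup("840");
            System.out.println(row[0] + " / " + row[1] + " / " + row[2]); // United States / US / USA
        }
    }

The XML, XSD enumeration, and HTML select generation methods would then simply iterate over the same array.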








2012-11-21

2012-11-21 Wednesday - Architecture Book Recommendations

Recommended Books for Architects

I realized tonight that the links to the books I recommended in this post were incorrect / broken, so I've updated them:
http://intltechventures.blogspot.com/2012/05/2012-05-17-thursday-recommened.html

...while there is some overlap, this earlier post may also have a few suggestions of interest:
http://intltechventures.blogspot.com/2008/12/2008-12-29-sunday-recommended-book-for.html

2012-11-18

2012-11-18 Sunday - Graphing and Plotting in Python


I'm spending some time this week researching various plotting and graphing solutions in Python.

The first package for investigation: matplotlib
http://matplotlib.org
https://github.com/matplotlib/matplotlib

My second choice for further investigation will be: MathGL
http://mathgl.sourceforge.net/

other solutions to possibly investigate later:
http://wiki.python.org/moin/NumericAndScientific/Plotting


2012-11-28 Update:
I just came across NetworkX
"Python language software package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks."
There are some additions available on the NIST Applied and Computational Mathematics Division web site [Network Modeling Software], including the following:

2012-11-17

2012-11-17 Saturday - Scott Young's MIT Challenge

Scott H. Young set a goal of learning MIT's entire 4-year computer science curriculum in one year, without taking any classes.

http://www.scotthyoung.com/blog/mit-challenge/

...he completed his goal on September 26, 2012, just under 12 months after beginning October 1st, 2011.

Regardless of any questions about the quality of his acquired/retained knowledge, the idea of challenging yourself to cover such a breadth of material (whether on a compressed timeline or not) cannot help but improve you in many ways:

  • A forced-march refresh of material you may not have covered in a long time
  • A forced-march exposure to potentially new material that wasn't available to you before
  • A real stretch of your learning habits
  • A way to go back and fill in (or reinforce) concepts that you may not have solidly internalized before
  • A way to challenge yourself to achieve greater efficiency in how you acquire, assimilate, and organize new knowledge

2012-11-17 Saturday - Software Modeling and Design

Over the last year or so I've been aggressively adding to my already extensive personal library (1200+ books) by frequently visiting thrift stores and, in particular, seeking out Friends of the Library type book sales - often adding 20-50 books in a weekend, usually for less than $20-$40 total.

For example, here's the 'treasure' from my last three book hunting expeditions:


Note: A very useful resource for finding local book sales: http://www.booksalefinder.com
[in case you may not be aware, supporting your local thrift stores is one way to directly help support local job creation in your neighborhood]

Since my reading interests are quite broad, the variety of books I've picked up spans many different disciplines.  But I'm always on the lookout for an interesting text that has some connection to my professional life as a solution architect.

A recent such acquisition: Hassan Gomaa's excellent 'Designing Concurrent, Distributed, and Real-Time Applications with UML' (published in 2000) - which I picked up for about $1.00

NOTE:
Dr. Hassan Gomaa,
Department of Computer Science
George Mason University
http://mason.gmu.edu/~hgomaa/
http://mason.gmu.edu/~hgomaa/CourseSlides.html

I consider myself an advanced practitioner of UML - but often use a limited subset in my day-to-day architecture work.  I enjoyed the opportunity to do a refresher on some of the less-frequently-used aspects - and in particular, the concerns that are relevant to real-time modeling - by diving into Gomaa's book.

Most UML books tend to use trivial examples - and rarely spend much time on the real-time UML modeling aspects.  At 700+ pages, this is an excellent text for junior level developers to significantly deepen their UML modeling skills.

I must say that the book is well written and has weathered the passage of time quite well.





In fact, I've been so pleased with Gomaa's writing that I've added his 2011 follow-up text, 'Software Modeling and Design: UML, Use Cases, Patterns, and Software Architectures' to my future purchase list.


2012-11-11

My Review of Think Python


A Concise Intro to Python Programming
By IT_Voyager from Ventura, CA on 11/11/2012


4 out of 5
Pros: Concise, Well-written, Helpful examples, Easy to understand, Accurate
Best Uses: Novice, Student
Describe Yourself: Solution Architect
Full Disclosure: I obtained a free copy of this book as part of the O'Reilly Blogger Review program.

Allen B. Downey's recent release (from O'Reilly) - 'Think Python' is an excellent example of how an introductory programming book should be crafted.

Clear, concise, entertaining, insightful, crisp, useful - these are some of the words that come to mind while reading this book.

There is good coverage of some of the differences between Python 2 and 3.

This is an excellent text for the novice programmer to learn Python - providing a general purpose overview of the language. The interested reader will find enough learning traction within this book to more easily proceed to more advanced texts.

Programming concepts are gradually introduced, with successive layers of refinement adding further understanding of more complex programming concepts.

At the end of each chapter are suggested exercises to further deepen the reader's grasp of the concepts just presented.

The inclusion of links to code samples and solutions at the http://thinkpython.com site is a nice touch.

While this book provides a very light overview of some essential software design concepts (Functions, Encapsulation, Generalization, Recursion, Inheritance, Polymorphism), the reader should plan to supplement it with books that cover functional programming in more depth, as well as class design and object-oriented concepts.

It is notable that although this book certainly fits into the introductory category, the coverage includes uncommon attention to such important matters as debugging and analysis of algorithms. As an additional bonus, Appendix C provides a discussion of Lumpy ("...to examine the state of a running program and generate object diagrams...and class diagrams") - which is included in the Swampy code discussed early in the book.

2012-10-05

2012-10-05 Friday - Recent Interesting Finds...


I recently started following @highscal on Twitter - also see: http://highscalability.com/ 

I happened to come across Ikai Lan's blog [@ikai] - a treasure trove of interesting posts (and he's also very active on github - https://github.com/ikai)

In one of his posts, Ikai mentioned Google Apps Script - which I hadn't looked at previously - and, in particular, the Google Finance Services - which I want to remember to come back and review later: https://developers.google.com/apps-script/service_finance
See Tutorials: https://developers.google.com/apps-script/articles 

Because of some recent performance tuning experiences ['challenges'] observed in a production environment with a 3rd party vendor's commercial Java application using Hibernate, I was also intrigued that Ikai had a post providing examples of jOOQ - a DSL for building type-safe SQL queries in Java.

Finally, Ikai's write-up on LinkedIn's use of node.js had a mention regarding netty - which I haven't looked at in quite a while - so this is another item to put on my reminder list for later research.

2012-10-02

2012-10-02 Tuesday - SHA-3 winner (Keccak)

http://csrc.nist.gov/groups/ST/hash/sha-3/winner_sha-3.html

NIST announced Keccak as the winner of the SHA-3 Cryptographic Hash Algorithm Competition, and as the new SHA-3 hash algorithm, in a press release issued on October 2, 2012. Keccak was designed by a team of cryptographers from Belgium and Italy:
    • Guido Bertoni (Italy) of STMicroelectronics,
    • Joan Daemen (Belgium) of STMicroelectronics,
    • Michaël Peeters (Belgium) of NXP Semiconductors, and
    • Gilles Van Assche (Belgium) of STMicroelectronics.


http://keccak.noekeon.org/
From the Keccak web site:

Keccak makes use of the sponge construction and is hence a sponge function family.
The design philosophy of Keccak is the hermetic sponge strategy. It uses the sponge construction for having provable security against all generic attacks. It calls a permutation that should not have structural properties with the exception of a compact description. By structural properties we mean properties that a typical random permutation does not have.

Keccak can be considered as a successor of RadioGatún. However, it has a very different design philosophy. The transformation applied to the state of RadioGatún in between the insertion of input blocks or extraction of output blocks is a simple round function. This round function has algebraic degree two and thus does not attempt to be free of structural properties. Therefore, unlike Keccak, RadioGatún requires blank rounds. Moreover, RadioGatún is not a sponge function as its iteration mode does not follow the sponge construction.
The permutation Keccak-f has the following properties:
  • It consists of the iteration of a simple round function, similar to a block cipher without a key schedule.
  • The nominal version of Keccak-f operates on a 1600-bit state. There are 6 other state widths, though: 25, 50, …, 800.
  • The choice of operations is limited to bitwise XOR, AND and NOT and rotations. There is no need for table-lookups, arithmetic operations, or data-dependent rotations.
About the performance of Keccak:
  • In software, Keccak[] takes about 13 cycles per byte on the reference platform defined by NIST.
  • In hardware, it is fast and compact, with area/speed trade-offs.
  • It is suitable for DPA-resistant implementations both in hardware and software.
Keccak can be used for:
  • keyed or randomized modes simply by prepending a key or salt to the input message;
  • generating infinite outputs, making it suitable as a stream cipher or mask generating function.
In these cases, the usage of the sponge construction allows for modes that are provably secure against generic attacks.
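
For anyone who simply wants to call the algorithm rather than study the sponge internals, here is a minimal Java sketch. It assumes a runtime where a SHA3-256 MessageDigest is available (modern JDKs register one; older JDKs need a third-party provider such as Bouncy Castle), and note that the standardized SHA-3 padding was adjusted after the 2012 competition, so digests may not match the original Keccak submission's test vectors.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class Sha3Example {
        public static void main(String[] args) throws Exception {
            // "SHA3-256" is the standardized form of the Keccak winner.
            MessageDigest sha3 = MessageDigest.getInstance("SHA3-256");
            byte[] digest = sha3.digest("The quick brown fox".getBytes(StandardCharsets.UTF_8));

            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            System.out.println(hex); // 64 hex characters = 256-bit digest
        }
    }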

2012-09-30

2012-09-30 Sunday - JavaOne 2012 Keynote

I'm watching the JavaOne keynote live:
http://www.oracle.com/javaone/index.html

I'll have some notes added to this shortly...



Sunday, September 30, 2012
4pm-7pm JavaOne Keynote

Java EE 7 targeted for sometime in 2013 (?)

Project Nashorn



JavaFX Update
- JavaFX available on Linux/ARM  and Scene Builder for Linux
- JavaFX 2.2 and beyond
-- JDK 8 plans include 3D, 3rd party controls
-- Intended as a replacement for Swing
- JavaFX will be fully open sourced by the end of 2012 (?)


Java SE 9 and Beyond
- Project Sumatra will enable Java applications to leverage multicore CPUs and parallel processors ["Write once, run anywhere extended to the heterogeneous platform"]
-- http://openjdk.java.net/projects/sumatra/
-- "to enable Java applications to take advantage of graphics processing units (GPUs) and accelerated processing units (APUs)--whether they are discrete devices or integrated with a CPU--to improve performance."

JDK 8
- to be feature complete in January 2013
- Developer Preview available in February
- George Saab called for JDK 8 "test pilots"




Java Dolphin Project - open sourced
- https://github.com/canoo/open-dolphin
 - JavaFX data integration project

Java Embedded
- Offerings: Java Card, ME-E, OJEC, SE-E
- New Embedded Releases (Java ME Embedded 3.2, Java Embedded Suite 7.0)

- EHS5 released today - smallest M2M-capable Java Embedded device









Java EE (presented by Cameron Purdy, creator of Coherence)
- Focus and Direction: Standard, Productivity, Portability, Extensibility, Modularity
- 14 vendors have passed EE 6 TCK
- Java EE 7 for 2013
- Scale to build dynamic HTML 5 Apps [WebSockets, Servlet 3.1 NIO, Server Sent Events, JSON, REST]
- Continued Productivity Focus (more API pruning, built on Java SE 7, broader uptake of Dependency Injection)
- and with caching (JSR 107) and  Batch Applications for the Java Platform (contributed by IBM) JSR 352 http://www.jcp.org/en/jsr/detail?id=352
- Java EE 7 Cloud features to be delayed until 2015 (targeted for Java EE 8 Platform)
- Java EE Persistence for NoSQL - no existing NoSQL standard yet
- EclipseLink NoSQL - JPA Style
-- MongoDB
-- Oracle NoSQL
-- Cassandra planned
-- more coming...
- WebSocket in Java EE 7 already in GlassFish
- Java EE 8: "Incremental delivery of JSRs"
- Jigsaw modularity with Java SE 9



Java EE Past, Present, Future







http://www.nikeinc.com is looking to hire Java programmers...


Oracle Certification Exam Guides
OCA/OCP Oracle Database 11g All-in-One Exam Guide with CD-ROM
Exams 1Z0-051, 1Z0-052, 1Z0-053
http://www.mhprofessional.com/product.php?isbn=0071629181

OCA Oracle Database SQL Certified Expert Exam Guide (Exam 1Z0-047)
http://www.mhprofessional.com/product.php?isbn=0071614214

Oracle Solaris 11 System Administration The Complete Reference
http://www.mcgrawhill.ca/professional/products/9780071790420/oracle+solaris+11+system+administration+the+complete+reference/

OCA: Oracle Database 11g Administrator Certified Associate Study Guide: (Exams 1Z0-051 and 1Z0-052)
http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470395125,descCd-authorInfo.html


Kaplan: Oracle Learning Tools and Practice Tests
http://www.selftestsoftware.com/certprep-materials/oracle.kap

"If I found a spaceship, I would never ever have to talk about the Titanic again"
- Dr. Robert Ballard (located Titanic)


IBM: Jason McGee discussed Java Applications in the Cloud - and Cloud Challenges for Java...
- share more, cooperate, use less, exploit
- The Patterns Approach for describing Cloud based Applications/Systems
-- workload pattern, virtual application instance

IBM: Java Applications in the Cloud (IBM's Java CTO talked about multi-JVM deployments in the cloud)
- Sharing
- J9 JVMs using sharing to reduce costs
-- shared classes cache for read-only shared artifacts (bytecodes)
-- Dynamic AOT (ahead-of-time code) - reuse JIT code from multiple JVMs
-- Reduce memory use by 20%, improving startup time 10-30%
- Multitenancy
-- JVMs evolution to support isolation within a single JVM
--- Single copy of code, multiple copy of static variables
--- resource management within isolation context
-- Goal: 10s of K vs MBs per tenant safely

- 'Liberty' Profile - for Web, OSGi and Mobile Apps
-- Lightweight Runtime for Cloud
--- Web profile server < 50 MB zip
--- Small memory footprint < 50 MB
--- Fast server start times < 2 secs
-- Standards Based Modularity for Cloud
--- Java EE++ built on OSGi modules and services
--- Modularity in Java SE 6 and up
-- Developer First Focus
--- Simple server configuration
--- fast easy setup
--- Integrated Eclipse AppDev tools
--- No restart required code changes
-- Dynamic Modular Runtime
--- ...

- Dynamic Behavior
-- Dynamic memory resize, processor reallocation and app migration
--- JVM will react in real-time to resource events
--- Integration across JVM/OS/HV for best performance


IBM: New System Z recently announced...
- New 5.5 GHz 6-core processor chip, large caches to optimize data serving, 2nd-gen out-of-order (OOO) design
- Hardware Transaction Memory (HTM)
- Run-time Instrumentation (RI)
- 2GB page frames - improved performance targeting 64-bit heaps
- Page-able 1MB large pages using flash
- New software hints/directives
- New trap instructions
- Up to 45% improvement in throughput amongst Java workloads measured with zEC12




 



IBM: Hardware Matters (Jason McGee, Chief Architect Cloud Computing, IBM Distinguished Engineer)
- hardware is changing and evolving rapidly
-- move to solid state, multi-core processors, Low latency high-bandwidth networks (RDMA), Advanced Energy Management, Storage Cloud Memory
- IBM: JVM Support for Multiple Languages (Jason McGee, Chief Architect Cloud Computing, IBM Distinguished Engineer)


2012-09-30 Sunday - Strange Loop 2012 Trip Report


I have several paragraphs and photos to add to this posting - but will need to come back to this in a few hours.

https://github.com/strangeloop/strangeloop2012/tree/master/slides


New Languages:
http://www.shenlanguage.org/learn-shen/index.html

http://roy.brianmckenna.org/

http://julialang.org/

http://www.rust-lang.org

http://elixir-lang.org/


Other good write-ups I've recently found:

Strange Loop Emerging Languages Camp Recap: Julia, Grace, Rust, and a Bandicoot 

http://www.ripariandata.com/blog/strange-loop-emerging-languages-camp-recap-julia-elixer-a-bandicoot/

https://gist.github.com/3763157


Interesting Links Mentioned/Referenced/Found during various sessions:



http://haskell.cs.yale.edu/wp-content/uploads/2011/01/yampa-arcade.pdf

https://github.com/ServiceStack/ServiceStack/wiki/New-Api

http://shaffner.us/cs/papers/tarpit.pdf
Moseley and Marks (2006)
Complexity caused by state and control
close the loop - process

http://www.slideshare.net/shinolajla/taxonomy-ofscala


http://c2.com/cgi/wiki?BlubParadox

http://www.paulgraham.com/avg.html
http://c2.com/cgi/wiki?BeatingTheAverages

http://www.eecs.harvard.edu/~mdw/proj/seda/
http://www.eecs.harvard.edu/~mdw/papers/quals-seda.pdf

http://www.altjs.org

http://www.emscripten.org

http://www.slideshare.net/nathanmarz/runaway-complexity-in-big-data-and-a-plan-to-stop-it

Cross-Compile XNA
http://www.jsil.org

http://worrydream.com
http://worrydream.com/Tangle/
http://worrydream.com/#!/Bio

https://speakerdeck.com/u/czarneckid/p/real-world-redis
research: 30 second guide to using REDIS [for distributed datastore]


http://www.information-management.com/news/40-Vendors-We-Are-Watching-2012-10023168-1.html?zkPrintable=1&nopagination=1
http://www.cs.nyu.edu/cs/faculty/shasha/papers/hpts.pdf






2012-09-16

2012-09-16 Sunday - Disruptor Resources

High Performance Inter-Thread Messaging Library
http://code.google.com/p/disruptor/

Concurrent Programming Using the Disruptor
[Trisha Gee's presentation to the London Java Community at Skillsmatter on 1st March 2012]
http://www.slideshare.net/trishagee/a-users-guide-to-the-disruptor

Whitepapers / Presentations
Disruptor: High performance alternative to bounded queues for exchanging data between concurrent threads [May 2011]

Martin Fowler's post [July 12, 2011]

QCON Video [Dec 2010]: LMAX - How to Do 100K TPS at Less than 1ms Latency


Changelog
http://code.google.com/p/disruptor/wiki/ChangeLog

Sample Code
http://code.google.com/p/disruptor/wiki/CodeExampleDisruptor2x

Getting Started
http://code.google.com/p/disruptor/wiki/GettingStarted


2012-09-16 Sunday - NVIDIA’s CUDA programming framework


"CUDA™ is a parallel computing platform and programming model invented by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU)."

CUDA Home

CUDA Developer Zone

CUDA Toolkit

CUDA downloads

CUDA documentation
CUDA training & education


NVIDIA NSIGHT Visual Studio Edition

NVIDIA NSIGHT Eclipse Edition

CUDA Language Solutions

Python: PyCUDA

CUDA Libraries:

2012-09-16 Sunday - One Week Until Strange Loop 2012

Next weekend I'm heading to St. Louis for the Strange Loop 2012 conference September 23-25. 

The conference feed on Twitter: @strangeloop_stl

In particular, there are two workshops I signed up for on Sunday the 23rd that look very interesting:

https://thestrangeloop.com/sessions/concurrent-programming-using-the-disruptor 
 
The Disruptor is an open source concurrent programming framework developed by LMAX Exchange, a financial exchange based in London.
The most interesting thing about it is how the Disruptor has promoted discussions about approaches to writing high performance code, and shown that Java is a serious contender in this space.

Contrary to the current trend of hiding multi-threaded concerns behind languages or frameworks, the Disruptor provides a way to do quite the opposite – to enable developers to think  about how to parallelise their architecture in a straightforward and easy to code fashion. In this workshop, Trisha Gee from LMAX Exchange will show examples of how to use the  Disruptor to share data between threads, and walk you through how to create your own application using the Disruptor.
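
To make the "share data between threads" idea concrete, below is a minimal single-producer / single-consumer sketch. It assumes the Disruptor 3.x DSL (the EventFactory + ThreadFactory constructor); the 2.x releases current at the time of this post wire things up slightly differently (an Executor and publisher helpers), so treat this as illustrative rather than a workshop excerpt.

    import com.lmax.disruptor.RingBuffer;
    import com.lmax.disruptor.dsl.Disruptor;
    import com.lmax.disruptor.util.DaemonThreadFactory;

    public class DisruptorSketch {

        // The event is a mutable, pre-allocated slot in the ring buffer.
        static class ValueEvent {
            long value;
        }

        public static void main(String[] args) throws Exception {
            int bufferSize = 1024; // must be a power of two

            Disruptor<ValueEvent> disruptor =
                    new Disruptor<>(ValueEvent::new, bufferSize, DaemonThreadFactory.INSTANCE);

            // Consumer: runs on its own thread and sees events in sequence order.
            disruptor.handleEventsWith((event, sequence, endOfBatch) ->
                    System.out.println("consumed " + event.value));

            RingBuffer<ValueEvent> ringBuffer = disruptor.start();

            // Producer: claim a slot, mutate it in place, then publish the sequence.
            for (long i = 0; i < 10; i++) {
                long seq = ringBuffer.next();
                try {
                    ringBuffer.get(seq).value = i;
                } finally {
                    ringBuffer.publish(seq);
                }
            }

            Thread.sleep(500); // give the handler time to drain before shutting down
            disruptor.shutdown();
        }
    }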
 
https://thestrangeloop.com/sessions/gpu-programming-crash-course 
 
This course is for developers who want to learn how to program and utilize the parallel computing power of the Graphics Processing Unit (GPU) using NVIDIA’s CUDA programming framework and, time permitting, OpenCL (although many of the basic concepts are very similar).

The course will start by introducing the concepts of general purpose GPU programming and go into the process of installing and setting up the development environment on the 3 OS’s  that support CUDA. We will also talk about the different language bindings for languages like Java, Python and Ruby.

The main gist of the course will involve learning the concepts of CUDA memory management together with the hardware capability of the GPU we are developing on.

Once we are familiar with the core concepts, we will talk about interoperability of the CUDA library with rendering and also the use of atomic primitives to accomplish things which  are quite trivial in the traditional CPU case. Then we will talk about the concept of CUDA streams.

We will talk about the different external libraries both 3rd party as well as provided by NVIDIA optimized for the GPU, that implement many useful algorithms for applications ranging  from Finance to Medical Imaging and Machine Learning.

Finally we will end the course by talking about GPUs in the cloud as a service and multi-GPU APIs.
 
 
 
The Mon/Tue conference sessions are also full of interesting topics:
https://thestrangeloop.com/schedule
 
  
 

2012-09-12

2012-09-12 Wednesday - JavaOne 2012 Session Schedule



Sadly, my schedule is rather jammed - and I won't be able to attend JavaOne in San Francisco, September 30th - October 4th, this year (2012)

However, I will look forward to checking back on the decks that may eventually be published for the various sessions:
http://glassfish.java.net/javaone2012/

2012-09-03

2012-09-02 Monday - Book Review: Visual Models for Software Requirements



I review for the O'Reilly Blogger Review Program 


 

Book Review: Visual Models for Software Requirements 

by Joy Beatty, Anthony Chen
http://oreillynet.com/pub/reviewproduct/827

Summary:

I'll start off by saying that if you have no process or discipline in your organization's approach to documenting and capturing software requirements, there are a lot of good suggestions covered in this book. Also, if your approach to documenting software requirements lacks an appreciation for business concerns, the Business Objectives modeling discussions in the book may be helpful for your software engineering team.  However, if your software requirements management processes are even moderately mature - and if you are already using Microsoft-centric tools to capture and manage software requirements - you will not find much that is new, novel, or of benefit in this book.

Positives:
  • An attempt to provide a comprehensive approach with an emphasis on business concerns
  • Coverage of the importance of Business Process modeling
  • Highlights the limitations of UML for capturing business-level concerns
  • Focus on Business Objectives modeling
  • Discussion/coverage of Key Performance Indicator Models (KPIM)
  • 'Feature Trees' [although, for any moderately complex effort, the choice of a visual modeling tool for drawing a Feature Tree - that lacks zoom/collapse capability of nodes - is problematic]
  • Inclusion of helpful links to references and additional resources at the end of each chapter.

Negatives:
  • Lack of integration of the Business Objectives modeling concepts with other mature software engineering frameworks (e.g. Zachman Framework, Open Group TOGAF)
  • Microsoft-centric / bias in promoting tooling
  • Lack of any significant discussion of other possible software requirement tooling for visual modeling (i.e. non-Microsoft-centric tooling)
  • Lack of appreciation / coverage of how to minimize the manual maintenance of traceability across artifacts
  • Promoting a software requirements approach that relies on SharePoint as the primary mechanism for publication/distribution; this becomes an abysmal experience as the size and duration of a project grows.
    • Link-rot: Over time, SharePoint sites are renamed, restructured, and reorganized.  This results in an untenable maintenance effort for most organizations.  Links embedded in MS Word, PowerPoint, Excel, and Visio documents are routinely broken - and identifying where to change all of the link references is also challenging.
    • Painful and labor-intensive efforts to automate any generation of cross-references or matrices [when visual models and requirements are stored as MS Office documents across multiple SharePoint sites].

In choosing this book to review, I was hoping to find some new insights into capturing requirements via visual models that might eliminate some of the 'pain' that most often exists in the requirements management processes (and tooling) adopted by large organizations. Since this book is written with a bias toward Microsoft(tm) technologies (e.g. SharePoint), teams that attempt to adopt the suggested approach will eventually run into the same types of long-term problems and 'pain' that I have observed firsthand on many projects, across several different organizations.

At the end of the day - after having lived with the pain of an absence of integrated tooling for the capture and management of software requirements on too many projects - I must conclude that the authors' lack of an integrated vision of tooling for visual software requirements management leads me to suggest that the majority of potential readers avoid this book.


2012-10-09 Tuesday Update:

Tonight I received a follow-up question in response to my Amazon review for this book:

 Kelvin, 
I found your review of the book "Visual Models for Software Requirements" to be very helpful and intriguing. The negatives you list for the book touch on some items I am looking to find a solution for. I am not a programmer, yet I aspire to use the tools of programmers to manage information in the form of data files, Word documents, graphics, Excel spreadsheets, PDF, text, etc. I was thinking of looking into Sharepoint as a means to keep the information organized, cross-referenced, searchable, and shareable. Then I read your comments about the "abysmal experience" that Sharepoint becomes when used as the primary mechanism for publication/distribution. Your description of the broken links embedded in Word, Powerpoint, Excel, etc. is exactly what I want to avoid.
So this brings me to my reason for writing to you. You mentioned that you have lived through the pain of an absence of integrated tooling for the capture and management of software requirements, which seems to imply that you now live relatively pain free. What tools do you use to capture, organize, maintain, and share requirements? I'm thinking what you have learned and are willing to recommend may provide me with ideas for something that may work for me.
Thank you for your time. 

xxxxx xxxxx




Here's my reply:

I'm happy to share with you my thoughts/recommendations - although it isn't a silver bullet. Even if I found the perfect tool - there are still challenges. For example, enabling collaborative editing of content with most visual modeling tools - doesn't scale well across organizational boundaries. In particular, for some industries that are very sensitive to a default security approach of 'nothing-shared' - the willingness of the organization to allow that cross-boundary access to information is often a battle that cannot be won.

Two approaches that I've used in the past:

1) Leveraging a wiki tool (such as MediaWiki or TikiWiki)
 PROs
- Allows easy creation and editing of content - as well as deep linking of the content within a single application container.
- Content can be ported (or archived for a snapshot) by exporting data from the wiki database.
- No expensive application licensing
- Scales well - wiki content is easily searched
 CONs:
- requires organizational discipline in how content is arranged and organized
- wiki page links can become orphaned [but most such tools have a feature to easily identify orphaned pages - try answering that same question in an organization containing many SharePoint repositories that exist as independent silos]
- wikis don't have inherent modeling / diagramming capabilities [however, there are options for creating custom plug-ins - so that may be a surmountable challenge - with some investment upfront]
- requires some discipline (and establishment of organization / processes) for how to organize and manage externally generated content (that may be either linked to, or uploaded to a central folder - for reference in wiki pages)

2) Leveraging a modeling tool that supports a centralized repository (such as Sparx Enterprise Architect, or other similar commercial products)

PROs:
- Supports publishing easily navigated HTML content
- Models (and model elements) are logically connected - so moving an element or package up/down in the hierarchy maintains the physical links to the content
- Supports rich annotation and complex relationship associations
- Supports automated generation of traceability matrices
- Easily supports generation of comprehensive documentation
- Supports establishing traceability for multiple purposes (e.g. testing, requirements, use cases, used-by, uses, etc.)
- supports searching across the entire repository and filtering rules
- supports capturing knowledge in a single repository - across organization roles (business analysts, architects, developers, testers, data modelers, network/infrastructure)

CONs:
- Per-seat (or floating) license costs can be a burden for large organizations
- Content published to HTML format must be re-generated and re-published whenever the underlying models change.
- diagrams that can be exported (e.g. .png, .jpg, .gif, etc.) are not always of an optimal resolution for viewing in slide decks or Word documents.
- [typically] requires connection to the repository to update content (although there are processes that can be established to bridge this - for example, leveraging version control for model check-in / check-out)

These are some of the trade-offs that immediately come to mind - and are all preferable to the nightmare of trying to locate information across multiple Sharepoint repositories - and links that may be broken due to sites being restructured, moved, or deleted.

2012-08-22

2012-08-22 Wednesday - Data Governance Tools

Today I'm researching tools that may support a portion of a Data Governance process.

The development/production environment includes both Oracle and Microsoft SQL Server databases.

Some of the challenges I'm seeking to address with a Data Governance process / tooling includes answering the following types of questions:
  • What changes occurred from the previous to the next version of the  schema?
  • How can we migrate data from one version of the schema to the next?
  • How can we migrate data from one environment to another?
  • If we need to migrate data from a Prod to a Dev environment, how can we ensure that data is reliably 'cleansed' or 'masked' to prevent Personally Identifiable Information (PII) from 'leaking' out of a Prod environment?
  • How to avoid having to recreate large volumes of existing test data from scratch - when major schema and/or data changes result from application changes

 Some of the desired features in a schema comparison tool would include the following:
  • Command Line interface for automatic generation of reports
  • DDL import of the previous/next schema definition files
  • Generation of a report in PDF, HTML, RTF formats to document both schemas
  • Reports to identify the deltas between the two schemas
  • Generation of the DDL to alter the previous schema to look like the future schema
However, beyond the simple task of schema comparison - Data Governance also encompasses the  challenges of the following tasks:
  • Migrating data - and supplying the transformation rules as needed
    • Splitting data from 1:N fields
    • Combining data from N:1 fields
    • Populating new data fields (via default value, lookup value, etc.)
    • Computing new values f(x):y
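
As a toy illustration of the "identify the deltas and generate the ALTER DDL" idea from the feature list above (the table, column names, and types are made up; a real tool would also have to handle type changes, constraints, indexes, and the data migration rules just listed):

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class SchemaDiffSketch {
        public static void main(String[] args) {
            // Hypothetical "previous" and "next" versions of one table: column name -> type.
            Map<String, String> previous = new LinkedHashMap<>();
            previous.put("customer_id", "NUMBER(10)");
            previous.put("full_name", "VARCHAR2(100)");

            Map<String, String> next = new LinkedHashMap<>();
            next.put("customer_id", "NUMBER(10)");
            next.put("first_name", "VARCHAR2(50)");   // a 1:N split of full_name
            next.put("last_name", "VARCHAR2(50)");

            // Columns added in the next version -> ADD clauses.
            next.forEach((col, type) -> {
                if (!previous.containsKey(col)) {
                    System.out.printf("ALTER TABLE customer ADD (%s %s);%n", col, type);
                }
            });

            // Columns removed in the next version -> DROP clauses.
            previous.keySet().stream()
                    .filter(col -> !next.containsKey(col))
                    .forEach(col -> System.out.printf("ALTER TABLE customer DROP COLUMN %s;%n", col));
        }
    }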

My initial inclination is to suggest the Altova Database Schema Differencing Tool (DiffDog(R) 2012)

Other Resources to Consider

    2012-06-16

    2012-06-16 Saturday - JavaScript Design Patterns

    I recently came across the following interesting articles on JavaScript Design Patterns:

    This article...
    http://macwright.org/2012/06/04/the-module-pattern.html

    led me here...

    'Learning JavaScript Design Patterns' by Addy Osmani
    http://addyosmani.com/resources/essentialjsdesignpatterns/book/



    and finally, here, for more JavaScript goodness than I can grok:
    http://bost.ocks.org/mike/

    2012-05-28

    2012-05-28 Monday - White House launches new digital government strategy


    White House launches new digital government strategy
    http://radar.oreilly.com/2012/05/white-house-launches-new-digit.html

    Federal CIO Steven VanRoekel and CTO Todd Park say open data will be the new default.

    In this memorandum, the president directs each major federal agency in the United States to make two key services that American citizens depend upon available on mobile devices within the next 12 months and to make "applicable" government information open and machine-readable by default. President Obama directed federal agencies to do two specific things: comply with the elements of the strategy by May 23, 2013 and to create a "/developer" page on every major federal agency's website.

    2012-05-17

    2012-05-17 Thursday - Recommended Architecture Books

    [updated 2012-11-21 - corrected bad links]

    A colleague recently asked me for suggested books to add to his personal library on the topic of documenting software architectures - here are a few of my initial suggestions:


    I have the previous edition of this book - it is a good overall foundation reference for documenting architectures:

    Documenting Software Architectures: Views and Beyond (2nd Edition), Paul C. Clements




    These are what I consider must-haves:

    Patterns of Enterprise Application Architecture, by Martin Fowler




    Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions, by Gregor Hohpe, and Bobby Woolf



    SOA Design Patterns, by Thomas Erl




    Every architect should have at least one great algorithms book, this is my preferred text:
    Introduction to Algorithms, by Thomas Cormen, et al.





    I've recently added the following book to my library, and have submitted a review on Amazon (as of 11/22/2012).  Although I've looked at the book with a critical eye - it does have some merit as an addition to my reference library:

    Service Design Patterns: Fundamental Design Solutions for SOAP/WSDL and RESTful Web Services, by Robert Daigneau




    I plan to add these to my own library:

    Refactoring to Patterns, by Joshua Kerievsky




    Java Application Architecture: Modularity Patterns with Examples Using OSGi, by Kirk Knoernschild





    Software Architecture in Practice (3rd Edition), by Len Bass, Paul Clements, and Rick Kazman



    2012-05-12

    Named Entity Recognizer (NER)

    I happened to come across this today:
    http://nlp.stanford.edu/software/CRF-NER.shtml

    Stanford NER (also known as CRFClassifier) is a Java implementation of a Named Entity Recognizer. Named Entity Recognition (NER) labels sequences of words in a text which are the names of things, such as person and company names, or gene and protein names. The software provides a general (arbitrary order) implementation of linear chain Conditional Random Field (CRF) sequence models, coupled with well-engineered feature extractors for Named Entity Recognition

     http://nlp.stanford.edu/software/jenny-ner-2007.pdf

    Language-Independent Named Entity Recognition (II)

    http://www.cnts.ua.ac.be/conll2003/ner/
    "Named entities are phrases that contain the names of persons, organizations, locations, times and quantities."

    2012-05-08

    2012-05-08 Tuesday - Strange Loop 2012

    Completed my registration for Strange Loop 2012, Sept. 23-25 in St. Louis https://thestrangeloop.com/

    I signed up for the following Early Workshops:

    Concurrent Programming Using the Disruptor, by Trisha Gee
    https://thestrangeloop.com/sessions/concurrent-programming-using-the-disruptor

    GPU Programming Crash Course
    https://thestrangeloop.com/sessions/gpu-programming-crash-course

    The sessions are an insane amount of goodness

    2012-04-22

    2012-04-22 Sunday - SQL Server 2012 Multidimensional Modeling

    This blog post is a reminder of some tutorials I want to spend time working through [when I have a bit more free time]

    Tutorials for SQL Server 2012
    http://technet.microsoft.com/en-us/library/hh231699.aspx

    AdventureWorks Sample Data Warehouse
    http://technet.microsoft.com/en-us/library/ms124623.aspx

    Multidimensional Modeling (Adventure Works Tutorial)
    http://technet.microsoft.com/en-us/library/ms170208.aspx

    SQL Server 2012 Product Documentation
    http://technet.microsoft.com/en-us/library/bb418433%28v=sql.10%29.aspx

    Developer Reference for SQL Server 2012
    http://technet.microsoft.com/en-us/library/dd206988.aspx



    SQL Server Data Warehouse Cribsheet (by Robert Sheldon)
    http://www.simple-talk.com/sql/learn-sql-server/sql-server-data-warehouse-cribsheet/

    Copyright

    © 2001-2021 International Technology Ventures, Inc., All Rights Reserved.