Sunday, September 30, 2012

2012-09-30 Sunday - JavaOne 2012 Keynote

I'm watching the JavaOne keynote live:
http://www.oracle.com/javaone/index.html

I'll have some notes added to this shortly...



4pm-7pm JavaOne Keynote

Java EE 7 targeted for sometime in 2013 (?)

Project Nashorn



JavaFX Update
- JavaFX available on Linux/ARM and Scene Builder for Linux
- JavaFX 2.2 and beyond
-- JDK 8 plans include 3D, 3rd party controls
-- Intended as a replacement for Swing
- JavaFX will be fully open sourced by the end of 2012 (?)


Java SE 9 and Beyond
- Project Sumatra will enable Java applications to leverage multicore CPUs and parallel processors ["Write once, run anywhere extended to the heterogeneous platform"]
-- http://openjdk.java.net/projects/sumatra/
-- "to enable Java applications to take advantage of graphics processing units (GPUs) and accelerated processing units (APUs)--whether they are discrete devices or integrated with a CPU--to improve performance."
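For a sense of what Sumatra is aiming at: the kind of data-parallel operation a GPU or APU could accelerate is exactly the shape of a Java 8 parallel stream. The sketch below runs on the fork/join pool today; whether and how Sumatra would offload such a pipeline to a GPU was still an open design question at the time, so this is only an illustration of the programming model, not Sumatra's API:

```java
import java.util.stream.IntStream;

public class ParallelMap {
    public static void main(String[] args) {
        // A data-parallel map/reduce: each element is independent,
        // which is the shape of work a GPU/APU could accelerate.
        long sumOfSquares = IntStream.rangeClosed(1, 1_000)
                .parallel()                   // runs on the fork/join pool today
                .mapToLong(i -> (long) i * i) // the per-element work Sumatra would aim to offload
                .sum();
        System.out.println(sumOfSquares);
    }
}
```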

JDK 8
- to be feature complete in January 2013
- Developer Preview available in February
- George Saab called for JDK 8 "test pilots"




Java Dolphin Project - open sourced
- https://github.com/canoo/open-dolphin
- JavaFX data integration project

Java Embedded
- Offerings: Java Card, ME-E, OJEC, SE-E
- New Embedded Releases (Java ME Embedded 3.2, Java Embedded Suite 7.0)

- EHS5 released today - smallest M2M-capable Java Embedded device









Java EE (presented by Cameron Purdy, creator of Coherence)
- Focus and Direction: Standard, Productivity, Portability, Extensibility, Modularity
- 14 vendors have passed EE 6 TCK
- Java EE 7 for 2013
- Scale to build dynamic HTML5 apps [WebSockets, Servlet 3.1 NIO, Server-Sent Events, JSON, REST]
- Continued Productivity Focus (more API pruning, built on Java SE 7, broader uptake of Dependency Injection)
- and with caching (JSR 107) and Batch Applications for the Java Platform (JSR 352, contributed by IBM) http://www.jcp.org/en/jsr/detail?id=352
- Java EE 7 Cloud features to be delayed until 2015 (targeted for Java EE 8 Platform)
- Java EE Persistence for NoSQL - no existing NoSQL standard yet
- EclipseLink NoSQL - JPA Style
-- MongoDB
-- Oracle NoSQL
-- Cassandra planned
-- more coming...
- WebSocket in Java EE 7 already in GlassFish
- Java EE 8: "Incremental delivery of JSRs"
- Jigsaw modularity with Java SE 9
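Among the Java EE 7 items above, JSR 107 standardizes a caching API for the platform. Pending the final javax.cache API, the basic get-or-load semantics such a standard covers can be sketched with a plain map-based cache (the class and method names here are illustrative, not the JSR's API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative cache-aside sketch -- NOT the JSR 107 javax.cache API,
// just the get-or-load behavior a caching standard has to define.
public class SimpleCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();
    private final Function<K, V> loader;

    SimpleCache(Function<K, V> loader) { this.loader = loader; }

    V get(K key) {
        // compute on miss, return the cached value on a hit
        return store.computeIfAbsent(key, loader);
    }

    public static void main(String[] args) {
        int[] loads = {0};
        SimpleCache<String, String> cache =
                new SimpleCache<>(k -> { loads[0]++; return k.toUpperCase(); });
        cache.get("orders");   // miss: invokes the loader
        cache.get("orders");   // hit: served from the map
        System.out.println(cache.get("orders") + " loads=" + loads[0]);
    }
}
```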



Java EE Past, Present, Future







http://www.nikeinc.com is looking to hire Java programmers...


Oracle Certification Exam Guides
OCA/OCP Oracle Database 11g All-in-One Exam Guide with CD-ROM
Exams 1Z0-051, 1Z0-052, 1Z0-053
http://www.mhprofessional.com/product.php?isbn=0071629181

OCA Oracle Database SQL Certified Expert Exam Guide (Exam 1Z0-047)
http://www.mhprofessional.com/product.php?isbn=0071614214

Oracle Solaris 11 System Administration The Complete Reference
http://www.mcgrawhill.ca/professional/products/9780071790420/oracle+solaris+11+system+administration+the+complete+reference/

OCA: Oracle Database 11g Administrator Certified Associate Study Guide (Exams 1Z0-051 and 1Z0-052)
http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470395125,descCd-authorInfo.html


Kaplan: Oracle Learning Tools and Practice Tests
http://www.selftestsoftware.com/certprep-materials/oracle.kap

"If I found a spaceship, I would never ever have to talk about the Titanic again"
- Dr. Robert Ballard (located Titanic)


IBM: Jason McGee discussed Java Applications in the Cloud - and Cloud Challenges for Java...
- share more, cooperate, use less, exploit
- The Patterns Approach for describing Cloud based Applications/Systems
-- workload pattern, virtual application instance

IBM: Java Applications in the Cloud (IBM's Java CTO talked about multi-JVM deployments in the cloud)
- Sharing
- J9 JVMs using sharing to reduce costs
-- shared classes cache for read-only shared artifacts (bytecodes)
-- Dynamic AOT (ahead-of-time code) - reuse JIT code from multiple JVMs
-- Reduce memory use by 20%, improving startup time 10-30%
- Multitenancy
-- JVMs evolution to support isolation within a single JVM
--- Single copy of code, multiple copies of static variables
--- resource management within isolation context
-- Goal: tens of KB vs. MBs per tenant, safely
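The multitenancy goal above (one copy of the code, a separate copy of "static" state per tenant) is something the JVM would do transparently. Purely as an illustration of the idea, a hand-rolled user-level analogue might look like this toy example (all names hypothetical):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Illustration only: a user-level analogue of what a multitenant JVM
// would do under the covers -- shared code, per-tenant static state.
public class TenantCounter {
    // Instead of a single static counter shared by all tenants,
    // keep one counter per tenant id.
    private static final Map<String, AtomicLong> perTenant = new ConcurrentHashMap<>();

    static long increment(String tenantId) {
        return perTenant.computeIfAbsent(tenantId, t -> new AtomicLong())
                        .incrementAndGet();
    }

    public static void main(String[] args) {
        increment("tenantA");
        increment("tenantA");
        increment("tenantB");
        // tenantA has counted to 2, tenantB to 1 -- isolated state, shared code
        System.out.println(perTenant.get("tenantA") + " " + perTenant.get("tenantB"));
    }
}
```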

- 'Liberty' Profile - for Web, OSGi and Mobile Apps
-- Lightweight Runtime for Cloud
--- Web profile server < 50 MB zip
--- Small memory footprint < 50 MB
--- Fast server start times < 2 secs
-- Standards Based Modularity for Cloud
--- Java EE++ built on OSGi modules and services
--- Modularity in Java SE 6 and up
-- Developer First Focus
--- Simple server configuration
--- fast easy setup
--- Integrated Eclipse AppDev tools
--- No restart required code changes
-- Dynamic Modular Runtime
--- ...

- Dynamic Behavior
-- Dynamic memory resize, processor reallocation and app migration
--- JVM will react in real-time to resource events
--- Integration across JVM/OS/HV for best performance


IBM: New System Z recently announced...
- New 5.5 GHz 6-core processor chip, large caches to optimize data serving, 2nd-gen out-of-order (OOO) design
- Hardware Transaction Memory (HTM)
- Run-time Instrumentation (RI)
- 2GB page frames - improved performance targeting 64-bit heaps
- Page-able 1MB large pages using flash
- New software hints/directives
- New trap instructions
- Up to 45% improvement in throughput for Java workloads measured on zEC12




 



IBM: Hardware Matters (Jason McGee, Chief Architect Cloud Computing, IBM Distinguished Engineer)
- hardware is changing and evolving rapidly
-- move to solid state, multi-core processors, low-latency high-bandwidth networks (RDMA), advanced energy management, Storage Class Memory
- IBM: JVM Support for Multiple Languages (Jason McGee, Chief Architect Cloud Computing, IBM Distinguished Engineer)


2012-09-30 Sunday - Strange Loop 2012 Trip Report


I have several paragraphs and photos to add to this posting - but will need to come back to this in a few hours.

https://github.com/strangeloop/strangeloop2012/tree/master/slides


New Languages:
http://www.shenlanguage.org/learn-shen/index.html

http://roy.brianmckenna.org/

http://julialang.org/

http://www.rust-lang.org

http://elixir-lang.org/


Other good write-ups I've recently found:

Strange Loop Emerging Languages Camp Recap: Julia, Grace, Rust, and a Bandicoot 

http://www.ripariandata.com/blog/strange-loop-emerging-languages-camp-recap-julia-elixer-a-bandicoot/

https://gist.github.com/3763157


Interesting Links Mentioned/Referenced/Found during various sessions:



http://haskell.cs.yale.edu/wp-content/uploads/2011/01/yampa-arcade.pdf

https://github.com/ServiceStack/ServiceStack/wiki/New-Api

http://shaffner.us/cs/papers/tarpit.pdf
Moseley and Marks (2006)
Complexity caused by state and control
close the loop - process

http://www.slideshare.net/shinolajla/taxonomy-ofscala


http://c2.com/cgi/wiki?BlubParadox

http://www.paulgraham.com/avg.html
http://c2.com/cgi/wiki?BeatingTheAverages

http://www.eecs.harvard.edu/~mdw/proj/seda/
http://www.eecs.harvard.edu/~mdw/papers/quals-seda.pdf
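The SEDA papers linked above decompose a server into stages connected by explicit event queues, each stage with its own thread pool. A minimal two-stage sketch of that structure in Java (toy names, not SEDA's actual code) might look like:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal sketch of SEDA's core idea: stages connected by explicit
// event queues, each stage backed by its own thread pool.
public class SedaSketch {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> parseQueue = new LinkedBlockingQueue<>();
        BlockingQueue<String> outQueue = new LinkedBlockingQueue<>();

        ExecutorService stage1 = Executors.newFixedThreadPool(2);
        // Stage 1: "parse" incoming requests, emit events to the next queue.
        stage1.submit(() -> {
            try {
                String req;
                while (!(req = parseQueue.take()).equals("EOF")) {
                    outQueue.put(req.toUpperCase());
                }
                outQueue.put("EOF"); // propagate shutdown downstream
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        for (String r : new String[] {"get /a", "get /b", "EOF"}) parseQueue.put(r);

        // Stage 2 (run inline here): consume the processed events.
        String ev;
        while (!(ev = outQueue.take()).equals("EOF")) System.out.println(ev);
        stage1.shutdown();
    }
}
```

In a full SEDA system each stage would also monitor its queue depth and resize its pool, which is where the architecture's overload-management story comes from.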

http://www.altjs.org

http://www.emscripten.org

http://www.slideshare.net/nathanmarz/runaway-complexity-in-big-data-and-a-plan-to-stop-it

Cross-Compile XNA
http://www.jsil.org

http://worrydream.com
http://worrydream.com/Tangle/
http://worrydream.com/#!/Bio

https://speakerdeck.com/u/czarneckid/p/real-world-redis
research: 30-second guide to using Redis [for a distributed datastore]


http://www.information-management.com/news/40-Vendors-We-Are-Watching-2012-10023168-1.html?zkPrintable=1&nopagination=1
http://www.cs.nyu.edu/cs/faculty/shasha/papers/hpts.pdf






Sunday, September 16, 2012

2012-09-16 Sunday - Disruptor Resources

High Performance Inter-Thread Messaging Library
http://code.google.com/p/disruptor/

Concurrent Programming Using the Disruptor
[Trisha Gee's presentation to the London Java Community at Skillsmatter on 1st March 2012]
http://www.slideshare.net/trishagee/a-users-guide-to-the-disruptor

Whitepapers / Presentations
Disruptor: High performance alternative to bounded queues for exchanging data between concurrent threads [May 2011]

Martin Fowler's post [July 12, 2011]

Martin Fowler
QCON Video [Dec 2010]:
LMAX - How to Do 100K TPS at Less than 1ms Latency
Concurrent Programming Using the Disruptor
[Trisha Gee's presentation to the London Java Community at Skillsmatter on 1st March 2012]


Changelog
http://code.google.com/p/disruptor/wiki/ChangeLog

Sample Code
http://code.google.com/p/disruptor/wiki/CodeExampleDisruptor2x

Getting Started
http://code.google.com/p/disruptor/wiki/GettingStarted
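For context on why the Disruptor is fast: its core is a pre-allocated ring buffer coordinated by sequence counters rather than locks. The real library's API is quite different and far more sophisticated; the following is only a toy single-producer/single-consumer sketch of the sequence-counter idea (it has no wrap-around guard, which the real Disruptor handles by tracking consumer sequences):

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy single-producer/single-consumer ring buffer in the spirit of the
// Disruptor -- illustration of the sequence-counter idea only.
public class MiniRingBuffer {
    private final long[] slots;
    private final int mask;                                  // size must be a power of two
    private final AtomicLong published = new AtomicLong(-1); // last published sequence
    private long nextClaim = 0;                              // producer-only state

    MiniRingBuffer(int size) { slots = new long[size]; mask = size - 1; }

    void publish(long value) {
        long seq = nextClaim++;
        slots[(int) (seq & mask)] = value; // write the slot...
        published.set(seq);                // ...then make it visible to the consumer
    }

    long get(long seq) {
        while (published.get() < seq) Thread.yield(); // busy-wait for the producer
        return slots[(int) (seq & mask)];
    }

    public static void main(String[] args) {
        MiniRingBuffer rb = new MiniRingBuffer(8);
        for (long i = 0; i < 5; i++) rb.publish(i * 10);
        long sum = 0;
        for (long seq = 0; seq < 5; seq++) sum += rb.get(seq);
        System.out.println(sum); // 0+10+20+30+40
    }
}
```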


2012-09-16 Sunday - NVIDIA’s CUDA programming framework


"CUDA™ is a parallel computing platform and programming model invented by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU)."

CUDA Home

CUDA Developer Zone

CUDA Toolkit

CUDA downloads

CUDA documentation
CUDA training & education


NVIDIA NSIGHT Visual Studio Edition

NVIDIA NSIGHT Eclipse Edition

CUDA Language Solutions

Python: PyCUDA

CUDA Libraries:

2012-09-16 Sunday - One Week Until Strange Loop 2012

Next weekend I'm heading to St. Louis for the Strange Loop 2012 conference September 23-25. 

The conference feed on Twitter: @strangeloop_stl

In particular, there are two workshops I signed up for on Sunday the 23rd that look very interesting:

https://thestrangeloop.com/sessions/concurrent-programming-using-the-disruptor 
 
The Disruptor is an open source concurrent programming framework developed by LMAX Exchange, a financial exchange based in London.
The most interesting thing about it is how the Disruptor has promoted discussions about approaches to writing high performance code, and shown that Java is a serious contender in this space.

Contrary to the current trend of hiding multi-threaded concerns behind languages or frameworks, the Disruptor provides a way to do quite the opposite – to enable developers to think about how to parallelise their architecture in a straightforward and easy to code fashion. In this workshop, Trisha Gee from LMAX Exchange will show examples of how to use the Disruptor to share data between threads, and walk you through how to create your own application using the Disruptor.
 
https://thestrangeloop.com/sessions/gpu-programming-crash-course 
 
This course is for developers who want to learn how to program and utilize the parallel computing power of the Graphics Processing Unit (GPU) using NVIDIA’s CUDA programming framework and, time permitting, OpenCL (although many of the basic concepts are very similar).

The course will start by introducing the concepts of general-purpose GPU programming and go into the process of installing and setting up the development environment on the three OSes that support CUDA. We will also talk about the different language bindings for languages like Java, Python and Ruby.

The main gist of the course will involve learning the concepts of CUDA memory management together with the hardware capability of the GPU we are developing on.

Once we are familiar with the core concepts, we will talk about interoperability of the CUDA library with rendering and also the use of atomic primitives to accomplish things which are quite trivial in the traditional CPU case. Then we will talk about the concept of CUDA streams.

We will talk about the different external libraries, both 3rd party as well as provided by NVIDIA, optimized for the GPU, that implement many useful algorithms for applications ranging from Finance to Medical Imaging and Machine Learning.

Finally we will end the course by talking about GPUs in the cloud as a service and multi-GPU APIs.
 
 
 
The Mon/Tue conference sessions are also full of interesting topics:
https://thestrangeloop.com/schedule
 
  
 

Wednesday, September 12, 2012

2012-09-12 Wednesday - JavaOne 2012 Session Schedule



Sadly, my schedule is rather jammed this year - and I won't be able to attend JavaOne in San Francisco, September 30th - October 4th, this year (2012).

However, I will look forward to checking back on the decks that may eventually be published for the various sessions:
http://glassfish.java.net/javaone2012/

Monday, September 03, 2012

2012-09-02 Monday - Book Review: Visual Models for Software Requirements



I review for the O'Reilly Blogger Review Program 


 

Book Review: Visual Models for Software Requirements 

by Joy Beatty, Anthony Chen
http://oreillynet.com/pub/reviewproduct/827

Summary:

I'll start off by saying that if you have no process or discipline in your organization's approach to documenting and capturing software requirements - there are a lot of good suggestions covered in this book. Also, if your approach to documenting software requirements lacks an appreciation for business concerns - the Business Objectives modeling discussions in the book may be helpful for your software engineering team. However, if your software requirements management processes are even moderately mature - and if you are already using Microsoft-centric tools to capture and manage software requirements - you will not find much that is new, novel, or of benefit in this book.

Positives:
  • An attempt to provide a comprehensive approach with an emphasis on business concerns
  • Coverage of the importance of Business Process modeling
  • Highlights the limitations of UML for capturing business-level concerns
  • Focus on Business Objectives modeling
  • Discussion/coverage of Key Performance Indicator Models (KPIM)
  • 'Feature Trees' [although, for any moderately complex effort, the choice of a visual modeling tool for drawing a Feature Tree - that lacks zoom/collapse capability of nodes - is problematic]
  • Inclusion of helpful links to references and additional resources at the end of each chapter.

Negatives:
  • Lack of integration of the Business Objectives modeling concepts with other mature software engineering models (e.g. Zachman Framework, Open Group TOGAF)
  • Microsoft-centric / bias in promoting tooling
  • Lack of any significant discussion of other possible software requirement tooling for visual modeling (i.e. non-Microsoft-centric tooling)
  • Lack of appreciation / coverage of how to minimize the manual maintenance of traceability across artifacts
  • Promoting a software requirements approach that relies on SharePoint as a primary mechanism for publication/distribution is an abysmal experience as the size and duration of a project grows.
    • Link-rot: Over time, SharePoint sites are renamed, restructured, and reorganized. This results in an untenable maintenance effort for most organizations. Links embedded in MS Word, PowerPoint, Excel, and Visio documents are routinely broken - and identifying where to change all of the link references is also challenging.
    • Painful and labor-intensive efforts to automate any generation of cross-references or matrices [when visual models and requirements are stored as MS Office documents across multiple SharePoint sites].

In choosing this book to review - I was hoping to find some new insights into capturing requirements - via visual models that might eliminate some of the 'pain' most often found in the requirements management processes (and tooling) adopted by most large organizations. Since this book is written with a bias toward Microsoft(tm) technologies (e.g. SharePoint) - teams that attempt to adopt the suggested approach will eventually run into the same types of long-term problems and 'pain' that I have observed firsthand on many projects - across several different organizations.

At the end of the day - after having lived with the pain of an absence of integrated tooling for the capture and management of software requirements on too many projects - I must conclude that the authors' lack of an integrated vision of tooling for visual software requirements management leads me to suggest avoiding this book for the majority of potential readers.


2012-10-09 Tuesday Update:

Tonight I received a follow-up question in response to my Amazon review for this book:

 Kelvin, 
I found your review of the book "Visual Models for Software Requirements" to be very helpful and intriguing. The negatives you list for the book touch on some items I am looking to find a solution for. I am not a programmer, yet I aspire to use the tools of programmers to manage information in the form of data files, Word documents, graphics, Excel spreadsheets, PDF, text, etc. I was thinking of looking into Sharepoint as a means to keep the information organized, cross-referenced, searchable, and shareable. Then I read your comments about the "abysmal experience" that Sharepoint becomes when used as the primary mechanism for publication/distribution. Your description of the broken links embedded in Word, Powerpoint, Excel, etc. is exactly what I want to avoid.
So this brings me to my reason for writing to you. You mentioned that you have lived through the pain of an absence of integrated tooling for the capture and management of software requirements, which seems to imply that you now live relatively pain free. What tools do you use to capture, organize, maintain, and share requirements? I'm thinking what you have learned and are willing to recommend may provide me with ideas for something that may work for me.
Thank you for your time. 

xxxxx xxxxx




Here's my reply:

I'm happy to share with you my thoughts/recommendations - although it isn't a silver bullet. Even if I found the perfect tool - there are still challenges. For example, enabling collaborative editing of content with most visual modeling tools - doesn't scale well across organizational boundaries. In particular, for some industries that are very sensitive to a default security approach of 'nothing-shared' - the willingness of the organization to allow that cross-boundary access to information is often a battle that cannot be won.

Two approaches that I've used in the past:

1) Leveraging a wiki tool (such as MediaWiki or TikiWiki)
 PROs
- Allows easy creation and editing of content - as well as deep linking of the content within a single application container.
- Content can be ported (or archived for a snapshot) by exporting data from the wiki database.
- No expensive application licensing
- Scales well - wiki content is easily searched
 CONs:
- requires organizational discipline in how content is arranged and organized
- wiki page links can become orphaned [but most such tools have a feature to easily identify orphaned pages - try answering that same question within an organization containing many SharePoint repositories that exist as independent silos]
- wiki's don't have inherent modeling / diagramming capabilities [however, there are options for creating custom plug-ins - so that may be a surmountable challenge - with some investment upfront]
- requires some discipline (and establishment of organization / processes) for how to organize and manage externally generated content (that may be either linked to, or uploaded to a central folder - for reference in wiki pages)

2) Leveraging a modeling tool that supports a centralized repository (such as Sparx Enterprise Architect, or other similar commercial products)

PROs:
- Supports publishing easily navigated HTML content
- Models (and model elements) are logically connected - so moving an element or package up/down in the hierarchy maintains the physical links to the content
- Supports rich annotation and complex relationship associations
- Supports automated generation of traceability matrices
- Easily supports generation of comprehensive documentation
- Supports establishing traceability for multiple purposes (e.g. testing, requirements, use cases, used-by, uses, etc.)
- supports searching across the entire repository and filtering rules
- supports capturing knowledge in a single repository - across organization roles (business analysts, architects, developers, testers, data modelers, network/infrastructure)

CONs:
- Per-seat (or floating) license costs can be a burden for large organizations
- Content published to HTML must be re-published (round-tripped) whenever the underlying models change.
- diagrams that can be exported (e.g. .png, .jpg, .gif, etc.) are not always of an optimal resolution for viewing in slide decks or Word documents.
- [typically] requires connection to the repository to update content (although there are processes that can be established to bridge this - for example, leveraging version control for model check-in / check-out)

These are some of the trade-offs that immediately come to mind - and are all preferable to the nightmare of trying to locate information across multiple SharePoint repositories - and links that may be broken due to sites being restructured, moved, or deleted.