

2019-07-02 Tuesday - TensorFlow-2.0.0-beta1 MNIST Demo

On Monday night, I upgraded my TensorFlow installation from 2.0.0-alpha0 to 2.0.0-beta1.
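For reference, the upgrade itself is a one-line pip command (assuming a pip-managed install):
    • $ pip install --upgrade tensorflow==2.0.0-beta1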

The documentation suggests:
"The best place to start is with the user-friendly Sequential API. Create models by plugging together building blocks. Run this “Hello World” example"

References:
My "Lab.ML" Github Repository folder: examples/TensorFlow/MINST_Demo/
...with sample output:



2019-03-28 Thursday - TensorFlow-2.0.0-alpha (TF 2.0 Alpha)


Today I spotted Cassie Kozyrkov's article on Hackernoon (she is Chief Decision Scientist at Google, Inc.):



Here are some of the relevant TF 2.0 documentation resources:
    • $ pip install tensorflow==2.0.0-alpha0
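A quick sanity check after installing (a minimal sketch; the version string is what the alpha build reports):

    import tensorflow as tf

    # Confirm the alpha build is active in the current environment.
    print(tf.__version__)  # expected: 2.0.0-alpha0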

35 videos from the TensorFlow Dev Summit 2019 (held March 6th and 7th at the Google Event Center in Sunnyvale, CA), touching specifically on TF 2.0, are available here:

TensorFlow YouTube Channel



I also took a moment to upgrade to the recent Python 3.7.3 release (from 3.7.2).
  


2018-07-07 Saturday - CPU vs GPU for Machine Learning Performance


https://www.nextplatform.com/2017/10/13/new-optimizations-improve-deep-learning-frameworks-cpus/
"Intel has been reported to claim that processing in BigDL is “orders of magnitude faster than out-of-box open source Caffe, Torch, or TensorFlow on a single-node Xeon processor (i.e., comparable with mainstream GPU).

2017-08-09
TensorFlow* Optimizations on Modern Intel® Architecture
https://software.intel.com/en-us/articles/tensorflow-optimizations-on-modern-intel-architecture
"TensorFlow benchmarks, with CPU optimizations added, see CPU performance gain as much as 72X"

A paper presented during the 2017 International Conference on Machine Learning (ICML):

  • Deep Tensor Convolution on Multicores
    • https://arxiv.org/abs/1611.06565
    • "...Another important reason to look at CPUs is when batch size is 1, as may be the case in Reinforcement Learning, where it is not worthwhile to move data between CPU and GPU." 
    • "Deep convolutional neural networks (ConvNets) of 3-dimensional kernels allow joint modeling of spatiotemporal features. These networks have improved performance of video and volumetric image analysis, but have been limited in size due to the low memory ceiling of GPU hardware. Existing CPU implementations overcome this constraint but are impractically slow. Here we extend and optimize the faster Winograd-class of convolutional algorithms to the N-dimensional case and specifically for CPU hardware. First, we remove the need to manually hand-craft algorithms by exploiting the relaxed constraints and cheap sparse access of CPU memory. Second, we maximize CPU utilization and multicore scalability by transforming data matrices to be cache-aware, integer multiples of AVX vector widths. Treating 2-dimensional ConvNets as a special (and the least beneficial) case of our approach, we demonstrate a 5 to 25-fold improvement in throughput compared to previous state-of-the-art." 

Copyright

© 2001-2025 International Technology Ventures, Inc., All Rights Reserved.