TensorFlow

From Wikipedia, the free encyclopedia
| developer = [[Google Brain]] Team<ref name=Credits />
| released = {{Start date and age|2015|11|09}}
| latest release version = <!-- {{LSR/wikidata}} -->
| repo = {{URL|https://github.com/tensorflow/tensorflow}}
| programming language = [[Python (programming language)|Python]], [[C++]], [[CUDA]]
| platform = [[Linux]], [[macOS]], [[Windows]], [[Android (operating system)|Android]], [[JavaScript]]<ref name="js">{{cite web |title=TensorFlow.js |url=https://js.tensorflow.org/faq/ |access-date=28 June 2018 <!--no longer in link, nor found in archive.org: |quote=TensorFlow.js has an API similar to the TensorFlow Python API, however it does not support all of the functionality of the TensorFlow Python API.--> |archive-date=May 6, 2018 |archive-url=https://web.archive.org/web/20180506083002/https://js.tensorflow.org/faq/ |url-status=live }}</ref>
| genre = [[Machine learning]] [[Library (computing)|library]]
| license = [[Apache License 2.0]]
| website = {{URL|https://tensorflow.org}}
}}
{{Machine learning}}

'''TensorFlow''' is a [[Free and open-source software|free and open-source]] [[Library (computing)|software library]] for [[machine learning]] and [[artificial intelligence]]. It can be used across a range of tasks but has a particular focus on [[Types of artificial neural networks#Training|training]] and [[Statistical inference|inference]] of [[deep neural networks]].<ref>{{Cite conference|conference=Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI ’16).|date=2016|title=TensorFlow: A System for Large-Scale Machine Learning|url=https://www.usenix.org/system/files/conference/osdi16/osdi16-abadi.pdf|last1=Abadi|first1=Martín|last2=Barham|first2=Paul|last3=Chen|first3=Jianmin|last4=Chen|first4=Zhifeng|last5=Davis|first5=Andy|last6=Dean|first6=Jeffrey|last7=Devin|first7=Matthieu|last8=Ghemawat|first8=Sanjay|last9=Irving|first9=Geoffrey|last10=Isard|first10=Michael|last11=Kudlur|first11=Manjunath|last12=Levenberg|first12=Josh|last13=Monga|first13=Rajat|last14=Moore|first14=Sherry|last15=Murray|first15=Derek G.|last16=Steiner|first16=Benoit|last17=Tucker|first17=Paul|last18=Vasudevan|first18=Vijay|last19=Warden|first19=Pete|last20=Wicke|first20=Martin|last21=Yu|first21=Yuan|last22=Zheng|first22=Xiaoqiang|arxiv=1605.08695|access-date=October 26, 2020|archive-date=December 12, 2020|archive-url=https://web.archive.org/web/20201212042511/https://www.usenix.org/system/files/conference/osdi16/osdi16-abadi.pdf|url-status=live}}</ref><ref name=YoutubeClip>{{cite AV media|url=https://www.youtube.com/watch?v=oZikw5k_2FM| archive-url=https://ghostarchive.org/varchive/youtube/20211111/oZikw5k_2FM| archive-date=2021-11-11 | url-status=live|title=TensorFlow: Open source machine learning|year= 2015|publisher=Google|ref={{harvid|Video clip by Google about TensorFlow|2015}}}}{{cbignore}} "It is machine learning software being used for various kinds of perceptual and language understanding tasks" – Jeffrey Dean, minute 0:47 / 2:17 from YouTube clip</ref>

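The training-and-inference workflow the library focuses on can be illustrated with a minimal sketch (this example assumes a standard TensorFlow 2.x installation with its bundled Keras API; the tiny dataset is invented for illustration):

```python
import numpy as np
import tensorflow as tf  # assumes TensorFlow 2.x is installed

# Invented toy data: five points sampled from y = 2x - 1.
xs = np.array([[-1.0], [0.0], [1.0], [2.0], [3.0]])
ys = np.array([[-3.0], [-1.0], [1.0], [3.0], [5.0]])

# Training: fit a single-neuron linear model by gradient descent.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mean_squared_error")
model.fit(xs, ys, epochs=500, verbose=0)

# Inference: the prediction for x = 10 should approach 2*10 - 1 = 19.
print(model.predict(np.array([[10.0]]), verbose=0))
```
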
It was developed by the [[Google Brain]] team for [[Google]]'s internal use in research and production.<ref>{{harvnb|Video clip by Google about TensorFlow|2015}} at minute 0:15/2:17</ref><ref>{{harvnb|Video clip by Google about TensorFlow|2015}} at minute 0:26/2:17</ref><ref>{{harvnb|Dean et al|2015|p=2}}</ref> The initial version was released under the [[Apache License 2.0]] in 2015.<ref name="Credits">{{cite web |title = Credits |url = https://tensorflow.org/about |website = TensorFlow.org |access-date = November 10, 2015 |archive-date = November 17, 2015 |archive-url = https://web.archive.org/web/20151117032147/https://tensorflow.org/about |url-status = live }}</ref><ref name="Metz-Nov9">{{cite web |last1 = Metz |first1 = Cade |title = Google Just Open Sourced TensorFlow, Its Artificial Intelligence Engine |url = https://www.wired.com/2015/11/google-open-sources-its-artificial-intelligence-engine/ |website = [[Wired (website)|Wired]] |access-date = November 10, 2015 |date = November 9, 2015 |archive-date = November 9, 2015 |archive-url = https://web.archive.org/web/20151109142618/https://www.wired.com/2015/11/google-open-sources-its-artificial-intelligence-engine/ |url-status = live }}</ref> Google released an updated version, TensorFlow 2.0, in September 2019.<ref name=":12" />

TensorFlow can be used in a wide variety of programming languages, including [[Python (programming language)|Python]], [[JavaScript]], [[C++]], and [[Java (programming language)|Java]],<ref name=":13">{{cite web|title=API Documentation|url=https://www.tensorflow.org/api_docs/|access-date=2018-06-27|archive-date=November 16, 2015|archive-url=https://web.archive.org/web/20151116154736/https://www.tensorflow.org/api_docs/|url-status=live}}</ref> facilitating its use in a range of applications in many sectors.

== History ==
=== DistBelief ===
Starting in 2011, Google Brain built DistBelief as a [[Proprietary software|proprietary]] [[machine learning]] system based on [[deep learning]] [[Artificial neural network|neural networks]]. Its use grew rapidly across diverse [[Alphabet Inc.|Alphabet]] companies in both research and commercial applications.<ref name=whitepaper2015>{{cite web |last1 = Dean |first1 = Jeff |last2 = Monga |first2 = Rajat |first3 = Sanjay |last3 = Ghemawat |display-authors = 2 |author-link1 = Jeff Dean (computer scientist) |title = TensorFlow: Large-scale machine learning on heterogeneous systems |url = http://download.tensorflow.org/paper/whitepaper2015.pdf |website = TensorFlow.org |publisher = Google Research |access-date = November 10, 2015 |date = November 9, 2015 |ref = {{harvid|Dean et al|2015}} |archive-date = November 20, 2015 |archive-url = https://web.archive.org/web/20151120004649/http://download.tensorflow.org/paper/whitepaper2015.pdf |url-status = live }}</ref><ref name=Perez>{{cite web |last1 = Perez |first1 = Sarah |title = Google Open-Sources The Machine Learning Tech Behind Google Photos Search, Smart Reply And More |url = https://techcrunch.com/2015/11/09/google-open-sources-the-machine-learning-tech-behind-google-photos-search-smart-reply-and-more/ |website = TechCrunch |access-date = November 11, 2015 |date = November 9, 2015 |archive-date = November 9, 2015 |archive-url = https://web.archive.org/web/20151109150138/https://techcrunch.com/2015/11/09/google-open-sources-the-machine-learning-tech-behind-google-photos-search-smart-reply-and-more/ |url-status = live }}</ref> Google assigned multiple computer scientists, including [[Jeff Dean (computer scientist)|Jeff Dean]], to simplify and [[Code refactoring|refactor]] the codebase of DistBelief into a faster, more robust application-grade library, which became TensorFlow.<ref name=Oremus>{{cite web |last1 = Oremus |first1 = Will |title = What Is TensorFlow, and Why Is Google So Excited About It? |url = https://www.slate.com/blogs/future_tense/2015/11/09/google_s_tensorflow_is_open_source_and_it_s_about_to_be_a_huge_huge_deal.html |website = Slate |access-date = November 11, 2015 |date = November 9, 2015 |archive-date = November 10, 2015 |archive-url = https://web.archive.org/web/20151110021146/https://www.slate.com/blogs/future_tense/2015/11/09/google_s_tensorflow_is_open_source_and_it_s_about_to_be_a_huge_huge_deal.html |url-status = live }}</ref> In 2009, the team, led by [[Geoffrey Hinton]], had implemented generalized [[backpropagation]] and other improvements, which allowed generation of [[neural network]]s with substantially higher accuracy, for instance a 25% reduction in errors in [[speech recognition]].<ref name=Ward-Bailey>{{cite web |last1 = Ward-Bailey |first1 = Jeff |title = Google chairman: We're making 'real progress' on artificial intelligence |url = https://www.csmonitor.com/Technology/2015/0914/Google-chairman-We-re-making-real-progress-on-artificial-intelligence |website = CSMonitor |access-date = November 25, 2015 |date = November 25, 2015 |archive-date = September 16, 2015 |archive-url = https://web.archive.org/web/20150916223243/https://www.csmonitor.com/Technology/2015/0914/Google-chairman-We-re-making-real-progress-on-artificial-intelligence |url-status = live }}</ref>

=== TensorFlow ===
TensorFlow is Google Brain's second-generation system. Version 1.0.0 was released on February 11, 2017.<ref>{{cite journal|url=https://github.com/tensorflow/tensorflow/blob/07bb8ea2379bd459832b23951fb20ec47f3fdbd4/RELEASE.md|title=Tensorflow Release 1.0.0|website=[[GitHub]]|year=2022|doi=10.5281/zenodo.4724125|author1=TensorFlow Developers|access-date=July 24, 2017|archive-date=February 27, 2021|archive-url=https://web.archive.org/web/20210227171533/https://github.com/tensorflow/tensorflow/blob/07bb8ea2379bd459832b23951fb20ec47f3fdbd4/RELEASE.md|url-status=live}}</ref> While the [[reference implementation]] runs on single devices, TensorFlow can run on multiple [[central processing unit|CPUs]] and [[GPU]]s (with optional [[CUDA]] and [[SYCL]] extensions for [[general-purpose computing on graphics processing units]]).<ref name=Metz-Nov10>{{cite news |last1 = Metz |first1 = Cade |title = TensorFlow, Google's Open Source AI, Points to a Fast-Changing Hardware World |url = https://www.wired.com/2015/11/googles-open-source-ai-tensorflow-signals-fast-changing-hardware-world/ |access-date = November 11, 2015 |magazine = Wired |date = November 10, 2015 |archive-date = November 11, 2015 |archive-url = https://web.archive.org/web/20151111163641/http://www.wired.com/2015/11/googles-open-source-ai-tensorflow-signals-fast-changing-hardware-world/ |url-status = live }}</ref> TensorFlow is available on 64-bit [[Linux]], [[macOS]], [[Windows]], and mobile computing platforms including [[Android (operating system)|Android]] and [[iOS]].{{Citation needed|date=March 2024}}

Its flexible architecture allows for the easy deployment of computation across a variety of platforms (CPUs, GPUs, [[Tensor processing unit|TPU]]s), and from desktops to clusters of servers to mobile and [[Edge device|edge devices]].

TensorFlow computations are expressed as [[State (computer science)|stateful]] [[dataflow programming|dataflow]] [[directed graph|graphs]]. The name TensorFlow derives from the operations that such neural networks perform on multidimensional data arrays, which are referred to as ''[[tensor]]s''.<ref>{{cite web | url = https://www.tensorflow.org/guide/tensor | title = Introduction to tensors | publisher = tensorflow.org | access-date = 3 March 2024 | archive-date = May 26, 2024 | archive-url = https://web.archive.org/web/20240526120806/https://www.tensorflow.org/guide/tensor | url-status = live }}</ref> During the [[Google I/O|Google I/O Conference]] in June 2016, Jeff Dean stated that 1,500 repositories on [[GitHub]] mentioned TensorFlow, of which only 5 were from Google.<ref name="1500repo's">[https://www.youtube.com/watch?v=Rnm83GqgqPE Machine Learning: Google I/O 2016, minute 07:30/44:44] {{Webarchive|url=https://web.archive.org/web/20161221095258/https://www.youtube.com/watch?v=Rnm83GqgqPE |date=December 21, 2016 }} Retrieved 2016-06-05.</ref>

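The dataflow-graph model above can be sketched in a few lines (assuming TensorFlow 2.x, where `tf.function` traces Python code into such a graph):

```python
import tensorflow as tf  # assumes TensorFlow 2.x is installed

# Tensors are multidimensional arrays; here a 2x2 matrix of floats.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])

# tf.function traces the Python function into a dataflow graph whose
# nodes are operations and whose edges are the tensors flowing between them.
@tf.function
def matmul_and_sum(a, b):
    return tf.reduce_sum(tf.matmul(a, b))

result = matmul_and_sum(x, x)  # executes the traced graph
print(result.numpy())  # 54.0
```
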
In December 2017, developers from Google, Cisco, RedHat, CoreOS, and CaiCloud introduced [[Kubeflow]] at a conference. Kubeflow allows operation and deployment of TensorFlow on [[Kubernetes]].

In March 2018, Google announced TensorFlow.js version 1.0 for machine learning in [[JavaScript]].<ref>{{cite web|url=https://medium.com/tensorflow/introducing-tensorflow-js-machine-learning-in-javascript-bf3eab376db|title=Introducing TensorFlow.js: Machine Learning in Javascript|last=TensorFlow|date=2018-03-30|website=Medium|access-date=2019-05-24|archive-date=March 30, 2018|archive-url=https://web.archive.org/web/20180330180144/https://medium.com/tensorflow/introducing-tensorflow-js-machine-learning-in-javascript-bf3eab376db|url-status=live}}</ref>

In January 2019, Google announced TensorFlow 2.0.<ref>{{cite web|url=https://medium.com/tensorflow/whats-coming-in-tensorflow-2-0-d3663832e9b8|title=What's coming in TensorFlow 2.0|last=TensorFlow|date=2019-01-14|website=Medium|access-date=2019-05-24|archive-date=January 14, 2019|archive-url=https://web.archive.org/web/20190114181937/https://medium.com/tensorflow/whats-coming-in-tensorflow-2-0-d3663832e9b8|url-status=live}}</ref> It became officially available in September 2019.<ref name=":12">{{cite web|url=https://medium.com/tensorflow/tensorflow-2-0-is-now-available-57d706c2a9ab|title=TensorFlow 2.0 is now available!|last=TensorFlow|date=2019-09-30|website=Medium|access-date=2019-11-24|archive-date=October 7, 2019|archive-url=https://web.archive.org/web/20191007214705/https://medium.com/tensorflow/tensorflow-2-0-is-now-available-57d706c2a9ab|url-status=live}}</ref>

In May 2019, Google announced TensorFlow Graphics for deep learning in computer graphics.<ref>{{cite web|url=https://medium.com/tensorflow/introducing-tensorflow-graphics-computer-graphics-meets-deep-learning-c8e3877b7668|title=Introducing TensorFlow Graphics: Computer Graphics Meets Deep Learning|last=TensorFlow|date=2019-05-09|website=Medium|access-date=2019-05-24|archive-date=May 9, 2019|archive-url=https://web.archive.org/web/20190509204620/https://medium.com/tensorflow/introducing-tensorflow-graphics-computer-graphics-meets-deep-learning-c8e3877b7668|url-status=live}}</ref>

=== Tensor processing unit (TPU) ===
{{main|Tensor processing unit}}
In May 2016, Google announced its [[Tensor processing unit]] (TPU), an [[application-specific integrated circuit]] (ASIC, a hardware chip) built specifically for machine learning and tailored for TensorFlow. A TPU is a programmable [[AI accelerator (computer hardware)|AI accelerator]] designed to provide high [[throughput]] of low-precision [[arithmetic]] (e.g., [[8-bit]]), and oriented toward using or running models rather than [[Supervised learning|training]] them. Google announced they had been running TPUs inside their data centers for more than a year, and had found them to deliver an [[order of magnitude]] better-optimized [[performance per watt]] for machine learning.<ref>{{cite web |author-link1 = Norman Jouppi |last1 = Jouppi |first1 = Norm |title = Google supercharges machine learning tasks with TPU custom chip |url = https://cloudplatform.googleblog.com/2016/05/Google-supercharges-machine-learning-tasks-with-custom-chip.html |website = Google Cloud Platform Blog |access-date = May 19, 2016 |archive-date = May 18, 2016 |archive-url = https://web.archive.org/web/20160518201516/https://cloudplatform.googleblog.com/2016/05/Google-supercharges-machine-learning-tasks-with-custom-chip.html |url-status = live }}</ref>

In May 2017, Google announced the second-generation, as well as the availability of the TPUs in [[Google Compute Engine]].<ref>{{cite news|url=https://www.blog.google/topics/google-cloud/google-cloud-offer-tpus-machine-learning/|title=Build and train machine learning models on our new Google Cloud TPUs|date=May 17, 2017|work=Google|access-date=May 18, 2017|archive-date=May 17, 2017|archive-url=https://web.archive.org/web/20170517182035/https://blog.google/topics/google-cloud/google-cloud-offer-tpus-machine-learning/|url-status=live}}</ref> The second-generation TPUs deliver up to 180 [[FLOPS|teraflops]] of performance, and when organized into clusters of 64 TPUs, provide up to 11.5 [[FLOPS|petaflops]].{{Citation needed|date=March 2024}}

In May 2018, Google announced the third-generation TPUs delivering up to 420 [[FLOPS|teraflops]] of performance and 128 GB high [[Bandwidth (computing)|bandwidth]] memory (HBM). Cloud TPU v3 Pods offer 100+ [[FLOPS|petaflops]] of performance and 32 TB HBM.<ref>{{cite web|url=https://cloud.google.com/tpu/|title=Cloud TPU|website=Google Cloud|access-date=2019-05-24|archive-date=May 17, 2017|archive-url=https://web.archive.org/web/20170517174135/https://cloud.google.com/tpu/|url-status=live}}</ref>

In February 2018, Google announced that they were making TPUs available in beta on the [[Google Cloud Platform]].<ref>{{cite news|url=https://cloudplatform.googleblog.com/2018/02/Cloud-TPU-machine-learning-accelerators-now-available-in-beta.html|title=Cloud TPU machine learning accelerators now available in beta|work=Google Cloud Platform Blog|access-date=2018-02-12|archive-date=February 12, 2018|archive-url=https://web.archive.org/web/20180212141508/https://cloudplatform.googleblog.com/2018/02/Cloud-TPU-machine-learning-accelerators-now-available-in-beta.html|url-status=live}}</ref>

=== Edge TPU ===
In July 2018, the Edge TPU was announced. Edge TPU is Google's purpose-built [[Application-specific integrated circuit|ASIC]] chip designed to run TensorFlow Lite machine learning (ML) models on small client computing devices such as smartphones,<ref>{{cite web|url=https://beebom.com/google-announces-edge-tpu-cloud-iot-edge-at-cloud-next-2018/|title=Google Announces Edge TPU, Cloud IoT Edge at Cloud Next 2018|last=Kundu|first=Kishalaya|date=2018-07-26|website=Beebom|language=en-US|access-date=2019-02-02|archive-date=May 26, 2024|archive-url=https://web.archive.org/web/20240526120854/https://beebom.com/google-announces-edge-tpu-cloud-iot-edge-at-cloud-next-2018/|url-status=live}}</ref> a practice known as [[edge computing]].

=== TensorFlow Lite ===
In May 2017, Google announced a software stack specifically for mobile development, TensorFlow Lite.<ref>{{cite web|url=https://www.theverge.com/2017/5/17/15645908/google-ai-tensorflowlite-machine-learning-announcement-io-2017|title=Google's new machine learning framework is going to put more AI on your phone|date=May 17, 2017|access-date=May 19, 2017|archive-date=May 17, 2017|archive-url=https://web.archive.org/web/20170517233339/https://www.theverge.com/2017/5/17/15645908/google-ai-tensorflowlite-machine-learning-announcement-io-2017|url-status=live}}</ref> In January 2019, the TensorFlow team released a developer preview of the mobile GPU inference engine with OpenGL ES 3.1 Compute Shaders on Android devices and Metal Compute Shaders on iOS devices.<ref>{{cite web|url=https://medium.com/tensorflow/tensorflow-lite-now-faster-with-mobile-gpus-developer-preview-e15797e6dee7|title=TensorFlow Lite Now Faster with Mobile GPUs (Developer Preview)|last=TensorFlow|date=2019-01-16|website=Medium|access-date=2019-05-24|archive-date=January 16, 2019|archive-url=https://web.archive.org/web/20190116183459/https://medium.com/tensorflow/tensorflow-lite-now-faster-with-mobile-gpus-developer-preview-e15797e6dee7|url-status=live}}</ref> In May 2019, Google announced that their TensorFlow Lite Micro (also known as TensorFlow Lite for Microcontrollers) and [[Arm Holdings|ARM's]] uTensor would be merging.<ref>{{cite web|url=https://os.mbed.com/blog/entry/uTensor-and-Tensor-Flow-Announcement/|title=uTensor and Tensor Flow Announcement {{!}} Mbed|website=os.mbed.com|access-date=2019-05-24|archive-date=May 9, 2019|archive-url=https://web.archive.org/web/20190509195115/https://os.mbed.com/blog/entry/uTensor-and-Tensor-Flow-Announcement/|url-status=live}}</ref>

=== Pixel Visual Core (PVC) ===
In October 2017, Google released the [[Google Pixel 2]] which featured their [[Pixel Visual Core]] (PVC), a fully programmable [[image processor|image]], [[vision processing unit|vision]] and [[AI accelerator|AI]] processor for mobile devices. The PVC supports TensorFlow for machine learning (and [[Halide (programming language)|Halide]] for image processing).


=== TensorFlow 2.0 ===

As TensorFlow's market share among research papers was declining to the advantage of [[PyTorch]],<ref name=":9">{{cite web|url=https://thegradient.pub/state-of-ml-frameworks-2019-pytorch-dominates-research-tensorflow-dominates-industry/|title=The State of Machine Learning Frameworks in 2019|publisher=The Gradient|first1=Horace|last1=He|date=10 October 2019|access-date=22 May 2020|archive-date=October 10, 2019|archive-url=https://web.archive.org/web/20191010161542/https://thegradient.pub/state-of-ml-frameworks-2019-pytorch-dominates-research-tensorflow-dominates-industry/|url-status=live}}</ref> the TensorFlow team announced the release of a new major version of the library in September 2019. TensorFlow 2.0 introduced many changes, the most significant being TensorFlow eager, which changed the automatic differentiation scheme from the static computational graph to the "Define-by-Run" scheme originally made popular by [[Chainer]] and later [[PyTorch]].<ref name=":9" /> Other major changes included the removal of old libraries, cross-compatibility between trained models on different versions of TensorFlow, and significant improvements to performance on GPUs.<ref>{{cite web|url=https://blog.tensorflow.org/2019/09/tensorflow-20-is-now-available.html|title=TensorFlow 2.0 is now available!|publisher=TensorFlow Blog|date=30 September 2019|access-date=22 May 2020|archive-date=October 30, 2019|archive-url=https://web.archive.org/web/20191030134434/https://blog.tensorflow.org/2019/09/tensorflow-20-is-now-available.html|url-status=live}}</ref>{{Primary source inline|date=August 2020}}


== Features ==

=== AutoDifferentiation ===
[[Automatic differentiation|AutoDifferentiation]] is the process of automatically calculating the gradient vector of a model with respect to each of its parameters. With this feature, TensorFlow can automatically compute the gradients for the parameters in a model, which is useful to algorithms such as [[backpropagation]] which require gradients to optimize performance.<ref name=":0">{{Cite web|title=Introduction to gradients and automatic differentiation|url=https://www.tensorflow.org/guide/autodiff|access-date=2021-11-04|website=TensorFlow|language=en|archive-date=October 28, 2021|archive-url=https://web.archive.org/web/20211028054417/https://www.tensorflow.org/guide/autodiff|url-status=live}}</ref> To do so, the framework must keep track of the order of operations done to the input Tensors in a model, and then compute the gradients with respect to the appropriate parameters.<ref name=":0" />
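This tracking-and-replay scheme is exposed through TensorFlow's gradient-tape API; the following minimal sketch (variable names and values are arbitrary, chosen only for illustration) records the operations applied to a variable and then computes the gradient:

```python
import tensorflow as tf

# Record operations on a trainable variable, then replay them to
# compute the gradient of y = x^2 at x = 3 (dy/dx = 2x = 6).
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x
grad = tape.gradient(y, x)
print(float(grad))  # 6.0
```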


=== Eager execution ===
TensorFlow includes an “eager execution” mode, which means that operations are evaluated immediately as opposed to being added to a computational graph which is executed later.<ref name=":3">{{Cite web|title=Eager execution {{!}} TensorFlow Core|url=https://www.tensorflow.org/guide/eager|access-date=2021-11-04|website=TensorFlow|language=en|archive-date=November 4, 2021|archive-url=https://web.archive.org/web/20211104011333/https://www.tensorflow.org/guide/eager|url-status=live}}</ref> Code executed eagerly can be examined step by step through a debugger, since data values are computed at each line of code rather than later in a computational graph.<ref name=":3" /> This execution paradigm is considered easier to debug because of its step-by-step transparency.<ref name=":3" />
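In eager mode, an operation returns a concrete value as soon as it is called, so intermediate results can be printed or inspected directly (a minimal sketch with arbitrary values):

```python
import tensorflow as tf

# Each operation runs immediately and returns a concrete tensor,
# so it can be printed or stepped through in a debugger right away.
a = tf.constant([[1.0, 2.0]])
b = tf.constant([[3.0], [4.0]])
c = tf.matmul(a, b)  # evaluated now, not deferred to a graph
print(c.numpy())     # [[11.]]
```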


=== Distribute ===
In both eager and graph executions, TensorFlow provides an API for distributing computation across multiple devices with various distribution strategies.<ref name=":4">{{Cite web|title=Module: tf.distribute {{!}} TensorFlow Core v2.6.1|url=https://www.tensorflow.org/api_docs/python/tf/distribute|access-date=2021-11-04|website=TensorFlow|language=en|archive-date=May 26, 2024|archive-url=https://web.archive.org/web/20240526120808/https://www.tensorflow.org/api_docs/python/tf/distribute|url-status=live}}</ref> This [[distributed computing]] can often speed up the training and evaluation of TensorFlow models and is a common practice in the field of AI.<ref name=":4" /><ref>{{Cite book|last=Omatu|first=Sigeru|url=http://worldcat.org/oclc/980886715|title=Distributed Computing and Artificial Intelligence, 11th International Conference|date=2014|publisher=Springer International Publishing|isbn=978-3-319-07593-8|oclc=980886715|access-date=November 4, 2021|archive-date=May 26, 2024|archive-url=https://web.archive.org/web/20240526120810/https://search.worldcat.org/title/980886715|url-status=live}}</ref>
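One such strategy, `tf.distribute.MirroredStrategy`, replicates a model across all visible GPUs (falling back to the CPU when none is present); the sketch below is illustrative, with an arbitrary toy model:

```python
import tensorflow as tf

# MirroredStrategy performs synchronous data-parallel training across
# all visible GPUs; with no GPU it runs on the single CPU device.
strategy = tf.distribute.MirroredStrategy()

# Variables created inside the scope are mirrored on every replica.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="sgd", loss="mse")
```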


=== Losses ===
To train and assess models, TensorFlow provides a set of [[loss function]]s (also known as [[Mathematical optimization|cost functions]]).<ref name=":5">{{Cite web|title=Module: tf.losses {{!}} TensorFlow Core v2.6.1|url=https://www.tensorflow.org/api_docs/python/tf/losses|access-date=2021-11-04|website=TensorFlow|language=en|archive-date=October 27, 2021|archive-url=https://web.archive.org/web/20211027133546/https://www.tensorflow.org/api_docs/python/tf/losses|url-status=live}}</ref> Some popular examples include [[mean squared error]] (MSE) and [[Cross entropy|binary cross entropy]] (BCE).<ref name=":5" /> These loss functions compute the "error" or "difference" between a model's output and the expected output (more broadly, the difference between two tensors). Different losses are used for different datasets and models in order to prioritize certain aspects of performance.
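Loss objects are callables that map a pair of tensors to a scalar; a minimal sketch with arbitrary values:

```python
import tensorflow as tf

# Mean squared error between expected and predicted values:
# mean((0 - 0)^2, (1 - 0.5)^2) = 0.125
mse = tf.keras.losses.MeanSquaredError()
loss = mse([0.0, 1.0], [0.0, 0.5])
print(float(loss))  # 0.125
```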


=== Metrics ===
In order to assess the performance of machine learning models, TensorFlow gives API access to commonly used metrics. Examples include various accuracy metrics (binary, categorical, sparse categorical) along with other metrics such as [[Precision and recall|Precision, Recall]], and [[Jaccard index|Intersection-over-Union]] (IoU).<ref>{{Cite web|title=Module: tf.metrics {{!}} TensorFlow Core v2.6.1|url=https://www.tensorflow.org/api_docs/python/tf/metrics|access-date=2021-11-04|website=TensorFlow|language=en|archive-date=November 4, 2021|archive-url=https://web.archive.org/web/20211104011333/https://www.tensorflow.org/api_docs/python/tf/metrics|url-status=live}}</ref>
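Metric objects are stateful and accumulate results over batches; a minimal sketch with arbitrary labels and predictions:

```python
import tensorflow as tf

# BinaryAccuracy thresholds predictions at 0.5 by default;
# here 3 of the 4 predictions match their labels.
metric = tf.keras.metrics.BinaryAccuracy()
metric.update_state([1, 1, 0, 0], [0.98, 0.6, 0.4, 0.7])
print(float(metric.result()))  # 0.75
```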


=== TF.nn ===
tf.nn is a module for executing primitive [[Artificial neural network|neural network]] operations on models.<ref name=":10">{{Cite web|title=Module: tf.nn {{!}} TensorFlow Core v2.7.0|url=https://www.tensorflow.org/api_docs/python/tf/nn|access-date=2021-11-06|website=TensorFlow|language=en|archive-date=May 26, 2024|archive-url=https://web.archive.org/web/20240526120809/https://www.tensorflow.org/api_docs/python/tf/nn|url-status=live}}</ref> Some of these operations include variations of [[Convolutional neural network|convolutions]] (1/2/3D, Atrous, depthwise), [[activation function]]s ([[Softmax function|Softmax]], [[Rectifier (neural networks)|RELU]], GELU, [[Sigmoid function|Sigmoid]], etc.) and their variations, and other operations ([[Max pooling|max-pooling]], bias-add, etc.).<ref name=":10" />
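These primitives apply element-wise or reduction operations directly to tensors; an illustrative sketch with arbitrary inputs:

```python
import tensorflow as tf

# Activation primitives from tf.nn applied directly to tensors.
x = tf.constant([-1.0, 0.0, 2.0])
relu_out = tf.nn.relu(x)                   # [0., 0., 2.]
softmax_out = tf.nn.softmax([[1.0, 1.0]])  # [[0.5, 0.5]]
```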


=== Optimizers ===
TensorFlow offers a set of optimizers for training neural networks, including [[Adam (optimization algorithm)|ADAM]], [[Adagrad|ADAGRAD]], and [[Stochastic gradient descent|Stochastic Gradient Descent]] (SGD).<ref name=":11">{{Cite web|title=Module: tf.optimizers {{!}} TensorFlow Core v2.7.0|url=https://www.tensorflow.org/api_docs/python/tf/optimizers|access-date=2021-11-06|website=TensorFlow|language=en|archive-date=October 30, 2021|archive-url=https://web.archive.org/web/20211030152658/https://www.tensorflow.org/api_docs/python/tf/optimizers|url-status=live}}</ref> When training a model, different optimizers offer different modes of parameter tuning, often affecting a model's convergence and performance.<ref>{{Cite book|last1=Dogo|first1=E. M.|last2=Afolabi|first2=O. J.|last3=Nwulu|first3=N. I.|last4=Twala|first4=B.|last5=Aigbavboa|first5=C. O.|title=2018 International Conference on Computational Techniques, Electronics and Mechanical Systems (CTEMS)|chapter=A Comparative Analysis of Gradient Descent-Based Optimization Algorithms on Convolutional Neural Networks|date=December 2018|chapter-url=https://ieeexplore.ieee.org/document/8769211|pages=92–99|doi=10.1109/CTEMS.2018.8769211|isbn=978-1-5386-7709-4|s2cid=198931032|access-date=July 25, 2023|archive-date=May 26, 2024|archive-url=https://web.archive.org/web/20240526120806/https://ieeexplore.ieee.org/document/8769211|url-status=live}}</ref>
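Each optimizer applies its update rule to variables given their gradients; a sketch of a single plain SGD step (the learning rate and values are arbitrary):

```python
import tensorflow as tf

# One step of stochastic gradient descent on loss = w^2.
opt = tf.keras.optimizers.SGD(learning_rate=0.1)
w = tf.Variable(2.0)
with tf.GradientTape() as tape:
    loss = w * w                # dloss/dw = 2w = 4.0
grads = tape.gradient(loss, [w])
opt.apply_gradients(zip(grads, [w]))
print(float(w))                 # 2.0 - 0.1 * 4.0 = 1.6
```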


== Usage and extensions ==

=== TensorFlow ===
TensorFlow serves as a core platform and library for machine learning. TensorFlow's APIs use [[Keras]] to allow users to make their own machine-learning models.<ref>{{Cite web|title=TensorFlow Core {{!}} Machine Learning for Beginners and Experts|url=https://www.tensorflow.org/overview|access-date=2021-11-04|website=TensorFlow|language=en|archive-date=January 20, 2023|archive-url=https://web.archive.org/web/20230120082541/https://www.tensorflow.org/overview|url-status=live}}</ref> In addition to building and training models, TensorFlow can also load the data used to train them and deploy them using TensorFlow Serving.<ref name=":1">{{Cite web|title=Introduction to TensorFlow|url=https://www.tensorflow.org/learn|access-date=2021-10-28|website=TensorFlow|language=en|archive-date=January 20, 2023|archive-url=https://web.archive.org/web/20230120082541/https://www.tensorflow.org/learn|url-status=live}}</ref>
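A typical workflow builds a model from Keras layers, compiles it with a loss and an optimizer, and fits it to data; the sketch below uses arbitrary random data and layer sizes chosen only for illustration:

```python
import numpy as np
import tensorflow as tf

# Define and compile a small fully connected model with the Keras API.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Train briefly on random placeholder data, then predict.
x = np.random.rand(16, 8).astype("float32")
y = np.random.rand(16, 1).astype("float32")
model.fit(x, y, epochs=1, verbose=0)
pred = model.predict(x, verbose=0)  # shape (16, 1)
```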


TensorFlow provides a stable [[Python (programming language)|Python]] [[API|Application Program Interface]] ([[API]]),<ref>{{Cite web|title=All symbols in TensorFlow 2 {{!}} TensorFlow Core v2.7.0|url=https://www.tensorflow.org/api_docs/python/tf/all_symbols|access-date=2021-11-06|website=TensorFlow|language=en|archive-date=November 6, 2021|archive-url=https://web.archive.org/web/20211106055527/https://www.tensorflow.org/api_docs/python/tf/all_symbols|url-status=live}}</ref> as well as APIs without backwards compatibility guarantee for [[JavaScript|Javascript]],<ref>{{Cite web|title=TensorFlow.js|url=https://js.tensorflow.org/|access-date=2021-11-06|website=js.tensorflow.org|archive-date=May 26, 2024|archive-url=https://web.archive.org/web/20240526120808/https://www.tensorflow.org/js|url-status=live}}</ref> [[C++]],<ref>{{Cite web|title=TensorFlow C++ API Reference {{!}} TensorFlow Core v2.7.0|url=https://www.tensorflow.org/api_docs/cc|access-date=2021-11-06|website=TensorFlow|language=en|archive-date=January 20, 2023|archive-url=https://web.archive.org/web/20230120082630/https://www.tensorflow.org/api_docs/cc|url-status=live}}</ref> and [[Java (programming language)|Java]].<ref>{{Cite web|title=org.tensorflow {{!}} Java|url=https://www.tensorflow.org/api_docs/java/org/tensorflow/package-summary|access-date=2021-11-06|website=TensorFlow|language=en|archive-date=November 6, 2021|archive-url=https://web.archive.org/web/20211106054023/https://www.tensorflow.org/api_docs/java/org/tensorflow/package-summary|url-status=live}}</ref><ref name=":13" /> Third-party language binding packages are also available for [[C Sharp (programming language)|C#]],<ref>{{cite web|last=Icaza|first=Miguel de|date=2018-02-17|title=TensorFlowSharp: TensorFlow API for .NET languages|website=[[GitHub]]|url=https://github.com/migueldeicaza/TensorFlowSharp|access-date=2018-02-18|archive-date=July 24, 
2017|archive-url=https://web.archive.org/web/20170724080201/https://github.com/migueldeicaza/TensorFlowSharp|url-status=live}}</ref><ref>{{cite web|last=Chen|first=Haiping|date=2018-12-11|title=TensorFlow.NET: .NET Standard bindings for TensorFlow|website=[[GitHub]]|url=https://github.com/SciSharp/TensorFlow.NET|access-date=2018-12-11|archive-date=July 12, 2019|archive-url=https://web.archive.org/web/20190712123610/https://github.com/SciSharp/TensorFlow.NET|url-status=live}}</ref> [[Haskell (programming language)|Haskell]],<ref>{{cite web|date=2018-02-17|title=haskell: Haskell bindings for TensorFlow|url=https://github.com/tensorflow/haskell|access-date=2018-02-18|publisher=tensorflow|archive-date=July 24, 2017|archive-url=https://web.archive.org/web/20170724080229/https://github.com/tensorflow/haskell|url-status=live}}</ref> [[Julia (programming language)|Julia]],<ref>{{cite web|last=Malmaud|first=Jon|date=2019-08-12|title=A Julia wrapper for TensorFlow|website=[[GitHub]]|url=https://github.com/malmaud/TensorFlow.jl|access-date=2019-08-14|quote=operations like sin, * (matrix multiplication), .* (element-wise multiplication), etc [..]. 
Compare to Python, which requires learning specialized namespaced functions like tf.matmul.|archive-date=July 24, 2017|archive-url=https://web.archive.org/web/20170724080234/https://github.com/malmaud/TensorFlow.jl|url-status=live}}</ref> [[Matlab|MATLAB]],<ref>{{cite web|date=2019-11-03|title=A MATLAB wrapper for TensorFlow Core|website=[[GitHub]]|url=https://github.com/asteinh/tensorflow.m|access-date=2020-02-13|archive-date=September 14, 2020|archive-url=https://web.archive.org/web/20200914161638/https://github.com/asteinh/tensorflow.m|url-status=live}}</ref> [[Object Pascal]],<ref>{{cite web|date=2023-01-19|title=Use TensorFlow from Pascal (FreePascal, Lazarus, etc.)|website=[[GitHub]]|url=https://github.com/zsoltszakaly/tensorflowforpascal|access-date=2023-01-20|archive-date=January 20, 2023|archive-url=https://web.archive.org/web/20230120083754/https://github.com/zsoltszakaly/tensorflowforpascal|url-status=live}}</ref> [[R (software)|R]],<ref>{{cite web|date=2018-02-17|title=tensorflow: TensorFlow for R|url=https://github.com/rstudio/tensorflow|access-date=2018-02-18|publisher=RStudio|archive-date=January 4, 2017|archive-url=https://web.archive.org/web/20170104081359/https://github.com/rstudio/tensorflow|url-status=live}}</ref> [[Scala (programming language)|Scala]],<ref>{{cite web|last=Platanios|first=Anthony|date=2018-02-17|title=tensorflow_scala: TensorFlow API for the Scala Programming Language|website=[[GitHub]]|url=https://github.com/eaplatanios/tensorflow_scala|access-date=2018-02-18|archive-date=February 18, 2019|archive-url=https://web.archive.org/web/20190218035307/https://github.com/eaplatanios/tensorflow_scala|url-status=live}}</ref> [[Rust (programming language)|Rust]],<ref>{{cite web|date=2018-02-17|title=rust: Rust language bindings for TensorFlow|url=https://github.com/tensorflow/rust|access-date=2018-02-18|publisher=tensorflow|archive-date=July 24, 
2017|archive-url=https://web.archive.org/web/20170724080245/https://github.com/tensorflow/rust|url-status=live}}</ref> [[OCaml]],<ref>{{cite web|last=Mazare|first=Laurent|date=2018-02-16|title=tensorflow-ocaml: OCaml bindings for TensorFlow|website=[[GitHub]]|url=https://github.com/LaurentMazare/tensorflow-ocaml|access-date=2018-02-18|archive-date=June 11, 2018|archive-url=https://web.archive.org/web/20180611155059/https://github.com/LaurentMazare/tensorflow-ocaml|url-status=live}}</ref> and [[Crystal (programming language)|Crystal]].<ref>{{cite web|title=fazibear/tensorflow.cr|url=https://github.com/fazibear/tensorflow.cr|access-date=2018-10-10|website=GitHub|language=en|archive-date=June 27, 2018|archive-url=https://web.archive.org/web/20180627120743/https://github.com/fazibear/tensorflow.cr|url-status=live}}</ref> Bindings that are now archived and unsupported include [[Go (programming language)|Go]]<ref>{{Cite web|title=tensorflow package - github.com/tensorflow/tensorflow/tensorflow/go - pkg.go.dev|url=https://pkg.go.dev/github.com/tensorflow/tensorflow/tensorflow/go|access-date=2021-11-06|website=pkg.go.dev|archive-date=November 6, 2021|archive-url=https://web.archive.org/web/20211106054028/https://pkg.go.dev/github.com/tensorflow/tensorflow/tensorflow/go|url-status=live}}</ref> and [[Swift (programming language)|Swift]].<ref>{{Cite web|title=Swift for TensorFlow (In Archive Mode)|url=https://www.tensorflow.org/swift/guide/overview|access-date=2021-11-06|website=TensorFlow|language=en|archive-date=November 6, 2021|archive-url=https://web.archive.org/web/20211106054024/https://www.tensorflow.org/swift/guide/overview|url-status=live}}</ref>


=== TensorFlow.js ===
TensorFlow also has a library for machine learning in JavaScript. Using the provided [[JavaScript]] APIs, TensorFlow.js allows users to run either native TensorFlow.js models or models converted from TensorFlow or TFLite, retrain them, and run them on the web.<ref name=":1" /><ref>{{Cite web|title=TensorFlow.js {{!}} Machine Learning for JavaScript Developers|url=https://www.tensorflow.org/js|access-date=2021-10-28|website=TensorFlow|language=en|archive-date=November 4, 2021|archive-url=https://web.archive.org/web/20211104081918/https://www.tensorflow.org/js/|url-status=live}}</ref>


=== TFLite ===
TensorFlow Lite has APIs for mobile apps or embedded devices to generate and deploy TensorFlow models.<ref>{{Cite web|title=TensorFlow Lite {{!}} ML for Mobile and Edge Devices|url=https://www.tensorflow.org/lite|access-date=2021-11-01|website=TensorFlow|language=en|archive-date=November 4, 2021|archive-url=https://web.archive.org/web/20211104011324/https://www.tensorflow.org/lite|url-status=live}}</ref> These models are compressed and optimized to be more efficient and to perform better on smaller-capacity devices.<ref name=":14">{{Cite web|title=TensorFlow Lite|url=https://www.tensorflow.org/lite/guide|access-date=2021-11-01|website=TensorFlow|language=en|archive-date=November 2, 2021|archive-url=https://web.archive.org/web/20211102150551/https://www.tensorflow.org/lite/guide|url-status=live}}</ref>


TensorFlow Lite uses [[FlatBuffers]] as the data serialization format for network models, eschewing the [[Protocol Buffers]] format used by standard TensorFlow models.<ref name=":14" />
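The conversion is performed ahead of time with `tf.lite.TFLiteConverter`, which serializes a model into the FlatBuffer format; a minimal sketch with an arbitrary untrained model:

```python
import tensorflow as tf

# Convert a (here untrained) Keras model into a TensorFlow Lite
# FlatBuffer, returned as a serialized bytes object.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(1),
])
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()  # bytes in FlatBuffer format
```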


=== TFX ===
TensorFlow Extended (TFX) provides numerous components to perform all the operations needed for end-to-end production.<ref name=":2">{{Cite web|title=TensorFlow Extended (TFX) {{!}} ML Production Pipelines|url=https://www.tensorflow.org/tfx|access-date=2021-11-02|website=TensorFlow|language=en|archive-date=November 4, 2021|archive-url=https://web.archive.org/web/20211104005652/https://www.tensorflow.org/tfx|url-status=live}}</ref> Components include loading, validating, and transforming data; tuning, training, and evaluating the machine learning model; and pushing the model into production.<ref name=":1" /><ref name=":2" />


=== Integrations ===

==== Numpy ====
Numpy is one of the most popular [[Python (programming language)|Python]] data libraries, and TensorFlow offers integration and compatibility with its data structures.<ref name=":15">{{Cite web|title=Customization basics: tensors and operations {{!}} TensorFlow Core|url=https://www.tensorflow.org/tutorials/customization/basics|access-date=2021-11-06|website=TensorFlow|language=en|archive-date=November 6, 2021|archive-url=https://web.archive.org/web/20211106055823/https://www.tensorflow.org/tutorials/customization/basics|url-status=live}}</ref> Numpy NDarrays, the library's native datatype, are automatically converted to TensorFlow Tensors in TF operations; the same is also true vice versa.<ref name=":15" /> This allows for the two libraries to work in unison without requiring the user to write explicit data conversions. Moreover, the integration extends to memory optimization by having TF Tensors share the underlying memory representations of Numpy NDarrays whenever possible.<ref name=":15" />


=== Extensions ===
TensorFlow also offers a variety of [[Library (computing)|libraries]] and [[Plug-in (computing)|extensions]] to advance and extend the models and methods used.<ref name=":33">{{Cite web|title=Guide {{!}} TensorFlow Core|url=https://www.tensorflow.org/guide|access-date=2021-11-04|website=TensorFlow|language=en|archive-date=July 17, 2019|archive-url=https://web.archive.org/web/20190717021617/https://www.tensorflow.org/guide|url-status=live}}</ref> For example, TensorFlow Recommenders and TensorFlow Graphics are [[Library (computing)|libraries]] for their respective functionalities in recommendation systems and graphics, TensorFlow Federated provides a framework for decentralized data, and TensorFlow Cloud allows users to directly interact with Google Cloud to integrate their local code to Google Cloud.<ref name=":43">{{Cite web|title=Libraries & extensions|url=https://www.tensorflow.org/resources/libraries-extensions|access-date=2021-11-04|website=TensorFlow|language=en|archive-date=November 4, 2021|archive-url=https://web.archive.org/web/20211104012048/https://www.tensorflow.org/resources/libraries-extensions|url-status=live}}</ref> Other add-ons, [[Library (computing)|libraries]], and [[Software framework|frameworks]] include TensorFlow Model Optimization, TensorFlow Probability, TensorFlow Quantum, and TensorFlow Decision Forests.<ref name=":33" /><ref name=":43" />


==== Google Colab ====
Google also released Colaboratory, a TensorFlow Jupyter notebook environment that does not require any setup.<ref>{{cite web|title=Colaboratory – Google|url=https://research.google.com/colaboratory/faq.html|access-date=2018-11-10|website=research.google.com|language=en|archive-date=October 24, 2017|archive-url=https://web.archive.org/web/20171024191457/https://research.google.com/colaboratory/faq.html|url-status=live}}</ref> It runs on Google Cloud and allows users free access to GPUs and the ability to store and share notebooks on [[Google Drive]].<ref>{{Cite web|title=Google Colaboratory|url=https://colab.research.google.com/|access-date=2021-11-06|website=colab.research.google.com|language=en|archive-date=February 3, 2021|archive-url=https://web.archive.org/web/20210203141626/https://colab.research.google.com/|url-status=live}}</ref>


==== Google JAX ====
{{main|Google JAX}}
[[Google JAX]] is a machine learning [[Software framework|framework]] for transforming numerical functions.<ref name=":jax">{{Citation |title=JAX: Autograd and XLA |date=2022-06-18 |url=https://github.com/google/jax |archive-url=https://web.archive.org/web/20220618205214/https://github.com/google/jax |publisher=Google |bibcode=2021ascl.soft11002B |access-date=2022-06-18 |archive-date=2022-06-18|last1=Bradbury |first1=James |last2=Frostig |first2=Roy |last3=Hawkins |first3=Peter |last4=Johnson |first4=Matthew James |last5=Leary |first5=Chris |last6=MacLaurin |first6=Dougal |last7=Necula |first7=George |last8=Paszke |first8=Adam |last9=Vanderplas |first9=Jake |last10=Wanderman-Milne |first10=Skye |last11=Zhang |first11=Qiao |journal=Astrophysics Source Code Library }}</ref><ref>{{Cite web |title=Using JAX to accelerate our research |url=https://www.deepmind.com/blog/using-jax-to-accelerate-our-research |url-status=live |archive-url=https://web.archive.org/web/20220618205746/https://www.deepmind.com/blog/using-jax-to-accelerate-our-research |archive-date=2022-06-18 |access-date=2022-06-18 |website=www.deepmind.com |language=en}}</ref><ref>{{Cite web |date=2022-04-25 |title=Why is Google's JAX so popular? |url=https://analyticsindiamag.com/why-is-googles-jax-so-popular/ |url-status=live |archive-url=https://web.archive.org/web/20220618210503/https://analyticsindiamag.com/why-is-googles-jax-so-popular/ |archive-date=2022-06-18 |access-date=2022-06-18 |website=Analytics India Magazine |language=en-US}}</ref> It is described as bringing together a modified version of [https://github.com/HIPS/autograd autograd] (automatic obtaining of the gradient function through differentiation of a function) and TensorFlow's [https://www.tensorflow.org/xla XLA] (Accelerated Linear Algebra). It is designed to follow the structure and workflow of [[NumPy]] as closely as possible and works with TensorFlow as well as other frameworks such as [[PyTorch]]. 
The primary functions of JAX are:<ref name=":jax" />


# grad: automatic differentiation


=== Medical ===
[[GE Healthcare]] used TensorFlow to increase the speed and accuracy of [[Magnetic resonance imaging|MRIs]] in identifying specific body parts.<ref>{{Cite web|title=Intelligent Scanning Using Deep Learning for MRI|url=https://blog.tensorflow.org/2019/03/intelligent-scanning-using-deep-learning.html|access-date=2021-11-04|language=en|archive-date=November 4, 2021|archive-url=https://web.archive.org/web/20211104183851/https://blog.tensorflow.org/2019/03/intelligent-scanning-using-deep-learning.html|url-status=live}}</ref> Google used TensorFlow to create DermAssist, a free mobile application that allows users to take pictures of their skin and identify potential health complications.<ref name=":6">{{Cite web|title=Case Studies and Mentions|url=https://www.tensorflow.org/about/case-studies|access-date=2021-11-04|website=TensorFlow|language=en|archive-date=October 26, 2021|archive-url=https://web.archive.org/web/20211026011835/https://www.tensorflow.org/about/case-studies|url-status=live}}</ref> [[Sinovation Ventures]] used TensorFlow to identify and classify eye diseases from [[optical coherence tomography]] (OCT) scans.<ref name=":6" />


=== Social media ===
[[Twitter]] implemented TensorFlow to rank tweets by importance for a given user, and changed their platform to show tweets in order of this ranking.<ref name=":7">{{Cite web|title=Ranking Tweets with TensorFlow|url=https://blog.tensorflow.org/2019/03/ranking-tweets-with-tensorflow.html|access-date=2021-11-04|language=en|archive-date=November 4, 2021|archive-url=https://web.archive.org/web/20211104005536/https://blog.tensorflow.org/2019/03/ranking-tweets-with-tensorflow.html|url-status=live}}</ref> Previously, tweets were simply shown in reverse chronological order.<ref name=":7" /> The photo sharing app [[VSCO]] used TensorFlow to help suggest custom filters for photos.<ref name=":6" />


=== Search Engine ===
[[Google]] officially released [[RankBrain]] on October 26, 2015, backed by TensorFlow.<ref>{{Cite web|last1=3.5kshares|last2=72kreads|title=A Complete Guide to the Google RankBrain Algorithm|url=https://www.searchenginejournal.com/google-algorithm-history/rankbrain/|access-date=2021-11-06|website=Search Engine Journal|date=September 2, 2020|language=en|archive-date=November 6, 2021|archive-url=https://web.archive.org/web/20211106062307/https://www.searchenginejournal.com/google-algorithm-history/rankbrain/|url-status=live}}</ref>


=== Education ===
InSpace, a virtual learning platform, used TensorFlow to filter out toxic chat messages in classrooms.<ref>{{Cite web|title=InSpace: A new video conferencing platform that uses TensorFlow.js for toxicity filters in chat|url=https://blog.tensorflow.org/2020/12/inspace-new-video-conferencing-platform-uses-tensorflowjs-for-toxicity-filters-in-chat.html|access-date=2021-11-04|language=en|archive-date=November 4, 2021|archive-url=https://web.archive.org/web/20211104005535/https://blog.tensorflow.org/2020/12/inspace-new-video-conferencing-platform-uses-tensorflowjs-for-toxicity-filters-in-chat.html|url-status=live}}</ref> Liulishuo, an online English learning platform, utilized TensorFlow to create an adaptive curriculum for each student.<ref name=":8">{{Cite web|last=Xulin|title=流利说基于 TensorFlow 的自适应系统实践|url=http://mp.weixin.qq.com/s?__biz=MzI0NjIzNDkwOA==&mid=2247484035&idx=1&sn=85fa0decac95e359435f68c50865ac0b&chksm=e94328f0de34a1e665e0d809b938efb34f0aa6034391891246fc223b7782ac3bfd6ddd588aa2#rd|access-date=2021-11-04|website=Weixin Official Accounts Platform|archive-date=November 6, 2021|archive-url=https://web.archive.org/web/20211106224313/https://mp.weixin.qq.com/s?__biz=MzI0NjIzNDkwOA==&mid=2247484035&idx=1&sn=85fa0decac95e359435f68c50865ac0b&chksm=e94328f0de34a1e665e0d809b938efb34f0aa6034391891246fc223b7782ac3bfd6ddd588aa2#rd|url-status=live}}</ref> TensorFlow was used to accurately assess a student's current abilities, and also helped decide the best future content to show based on those capabilities.<ref name=":8" />


=== Retail ===
The e-commerce platform [[Carousell (company)|Carousell]] used TensorFlow to provide personalized recommendations for customers.<ref name=":6" /> The cosmetics company ModiFace used TensorFlow to create an augmented reality experience for customers to test various shades of make-up on their face.<ref>{{Cite web|title=How Modiface utilized TensorFlow.js in production for AR makeup try on in the browser|url=https://blog.tensorflow.org/2020/02/how-modiface-utilized-tensorflowjs-in-ar-makeup-in-browser.html|access-date=2021-11-04|language=en|archive-date=November 4, 2021|archive-url=https://web.archive.org/web/20211104005535/https://blog.tensorflow.org/2020/02/how-modiface-utilized-tensorflowjs-in-ar-makeup-in-browser.html|url-status=live}}</ref>




=== Research ===
TensorFlow is the foundation for the automated [[image captioning|image-captioning]] software [[DeepDream]].<ref name="Byrne">{{cite web |last1 = Byrne |first1 = Michael |title = Google Offers Up Its Entire Machine Learning Library as Open-Source Software |url = https://www.vice.com/en/article/8q8avx/google-offers-up-its-entire-machine-learning-library-as-open-source |website = Vice |access-date = November 11, 2015 |date = November 11, 2015 |archive-date = January 25, 2021 |archive-url = https://web.archive.org/web/20210125121138/https://www.vice.com/en/article/8q8avx/google-offers-up-its-entire-machine-learning-library-as-open-source |url-status = live }}</ref>


{{-}}
* [[Keras]]


== References ==

=== General ===
{{Refbegin}}
*{{Cite book
|first1 = Laurence
|last1 = Moroney
|date = October 1, 2020
|title = AI and Machine Learning for Coders
|edition = 1st
|publisher = [[O'Reilly Media]]
|page = 365
|isbn = 9781492078197
|url = https://www.oreilly.com/library/view/ai-and-machine/9781492078180/
|access-date = December 21, 2020
|archive-date = June 7, 2021
|archive-url = https://web.archive.org/web/20210607074743/https://www.oreilly.com/library/view/ai-and-machine/9781492078180/
|url-status = live
}}
*{{Cite book
|first1 = Aurélien
|last1 = Géron
|date = October 15, 2019
|title = Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow
|edition = 2nd
|publisher = [[O'Reilly Media]]
|page = 856
|isbn = 9781492032632
|url = https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/
|access-date = November 25, 2019
|archive-date = May 1, 2021
|archive-url = https://web.archive.org/web/20210501010926/https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/
|url-status = live
}}
*{{Cite book
|first1 = Bharath
|last1 = Ramsundar
|first2 = Reza Bosagh
|last2 = Zadeh
|date = March 23, 2018
|title = TensorFlow for Deep Learning
|edition = 1st
|publisher = [[O'Reilly Media]]
|page = 256
|isbn = 9781491980446
|url = https://www.oreilly.com/library/view/tensorflow-for-deep/9781491980446/
|access-date = November 25, 2019
|archive-date = June 7, 2021
|archive-url = https://web.archive.org/web/20210607150529/https://www.oreilly.com/library/view/tensorflow-for-deep/9781491980446/
|url-status = live
}}
*{{Cite book
|first1 = Tom
|last1 = Hope
|first2 = Yehezkel S.
|last2 = Resheff
|first3 = Itay
|last3 = Lieder
|date = August 27, 2017
|title = Learning TensorFlow: A Guide to Building Deep Learning Systems
|edition = 1st
|publisher = [[O'Reilly Media]]
|page = 242
|isbn = 9781491978504
|url = https://www.oreilly.com/library/view/learning-tensorflow/9781491978504/
|access-date = November 25, 2019
|archive-date = March 8, 2021
|archive-url = https://web.archive.org/web/20210308153359/https://www.oreilly.com/library/view/learning-tensorflow/9781491978504/
|url-status = live
}}
*{{Cite book
| isbn = 9781617293870
}}

{{Refend}}

=== Citations ===
{{reflist}}


== External links ==
* {{Official website|https://www.tensorflow.org}}
* [https://www.oreilly.com/library/view/learning-tensorflowjs/9781492090786/ Learning TensorFlow.js Book (ENG)]


{{Deep learning software}}
{{Differentiable computing}}
{{Google FOSS}}


[[Category:Applied machine learning]]
[[Category:Deep learning software]]
[[Category:Data mining and machine learning software]]
[[Category:Deep learning]]
[[Category:Free software programmed in C++]]
[[Category:Free software programmed in Python]]

Latest revision as of 12:09, 26 May 2024

TensorFlow
Developer(s): Google Brain Team[1]
Initial release: November 9, 2015
Repository: github.com/tensorflow/tensorflow
Written in: Python, C++, CUDA
Platform: Linux, macOS, Windows, Android, JavaScript[2]
Type: Machine learning library
License: Apache License 2.0
Website: tensorflow.org

TensorFlow is a free and open-source software library for machine learning and artificial intelligence. It can be used across a range of tasks but has a particular focus on training and inference of deep neural networks.[3][4]

It was developed by the Google Brain team for Google's internal use in research and production.[5][6][7] The initial version was released under the Apache License 2.0 in 2015.[1][8] Google released an updated version, TensorFlow 2.0, in September 2019.[9]

TensorFlow can be used in a wide variety of programming languages, including Python, JavaScript, C++, and Java,[10] facilitating its use in a range of applications in many sectors.

History

DistBelief

Starting in 2011, Google Brain built DistBelief as a proprietary machine learning system based on deep learning neural networks. Its use grew rapidly across diverse Alphabet companies in both research and commercial applications.[11][12] Google assigned multiple computer scientists, including Jeff Dean, to simplify and refactor the codebase of DistBelief into a faster, more robust application-grade library, which became TensorFlow.[13] In 2009, the team, led by Geoffrey Hinton, had implemented generalized backpropagation and other improvements, which allowed generation of neural networks with substantially higher accuracy, for instance a 25% reduction in errors in speech recognition.[14]

TensorFlow

TensorFlow is Google Brain's second-generation system. Version 1.0.0 was released on February 11, 2017.[15] While the reference implementation runs on single devices, TensorFlow can run on multiple CPUs and GPUs (with optional CUDA and SYCL extensions for general-purpose computing on graphics processing units).[16] TensorFlow is available on 64-bit Linux, macOS, Windows, and mobile computing platforms including Android and iOS.[citation needed]

Its flexible architecture allows for the easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs), and from desktops to clusters of servers to mobile and edge devices.

TensorFlow computations are expressed as stateful dataflow graphs. The name TensorFlow derives from the operations that such neural networks perform on multidimensional data arrays, which are referred to as tensors.[17] During the Google I/O Conference in June 2016, Jeff Dean stated that 1,500 repositories on GitHub mentioned TensorFlow, of which only 5 were from Google.[18]
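The dataflow-graph model can be illustrated with a minimal sketch (assuming TensorFlow 2's `tf.function` API, which traces a Python function into a graph):

```python
import tensorflow as tf

# tf.function traces the Python function into a reusable dataflow graph;
# the tensors flowing between its operations give TensorFlow its name.
@tf.function
def square_plus_one(x):
    return x * x + 1.0

result = square_plus_one(tf.constant(2.0))  # the traced graph executes: 2^2 + 1 = 5.0
```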

In March 2018, Google announced TensorFlow.js version 1.0 for machine learning in JavaScript.[19]

In January 2019, Google announced TensorFlow 2.0.[20] It became officially available in September 2019.[9]

In May 2019, Google announced TensorFlow Graphics for deep learning in computer graphics.[21]

Tensor processing unit (TPU)

In May 2016, Google announced its Tensor processing unit (TPU), an application-specific integrated circuit (ASIC, a hardware chip) built specifically for machine learning and tailored for TensorFlow. A TPU is a programmable AI accelerator designed to provide high throughput of low-precision arithmetic (e.g., 8-bit), and oriented toward using or running models rather than training them. Google announced they had been running TPUs inside their data centers for more than a year, and had found them to deliver an order of magnitude better-optimized performance per watt for machine learning.[22]

In May 2017, Google announced the second-generation TPUs, as well as their availability in Google Compute Engine.[23] The second-generation TPUs deliver up to 180 teraflops of performance, and when organized into clusters of 64 TPUs, provide up to 11.5 petaflops.[citation needed]

In May 2018, Google announced the third-generation TPUs delivering up to 420 teraflops of performance and 128 GB high bandwidth memory (HBM). Cloud TPU v3 Pods offer 100+ petaflops of performance and 32 TB HBM.[24]

In February 2018, Google announced that they were making TPUs available in beta on the Google Cloud Platform.[25]

Edge TPU

In July 2018, the Edge TPU was announced. Edge TPU is Google's purpose-built ASIC chip designed to run TensorFlow Lite machine learning (ML) models on small client computing devices such as smartphones,[26] an approach known as edge computing.

TensorFlow Lite

In May 2017, Google announced a software stack specifically for mobile development, TensorFlow Lite.[27] In January 2019, the TensorFlow team released a developer preview of the mobile GPU inference engine with OpenGL ES 3.1 Compute Shaders on Android devices and Metal Compute Shaders on iOS devices.[28] In May 2019, Google announced that their TensorFlow Lite Micro (also known as TensorFlow Lite for Microcontrollers) and ARM's uTensor would be merging.[29]

TensorFlow 2.0

As TensorFlow's market share among research papers was declining to the advantage of PyTorch,[30] the TensorFlow Team announced a release of a new major version of the library in September 2019. TensorFlow 2.0 introduced many changes, the most significant being TensorFlow eager, which changed the automatic differentiation scheme from the static computational graph to the "Define-by-Run" scheme originally made popular by Chainer and later PyTorch.[30] Other major changes included removal of old libraries, cross-compatibility between trained models on different versions of TensorFlow, and significant improvements to the performance on GPU.[31][non-primary source needed]

Features

AutoDifferentiation

AutoDifferentiation is the process of automatically calculating the gradient vector of a model with respect to each of its parameters. With this feature, TensorFlow can automatically compute the gradients for the parameters in a model, which is useful for algorithms such as backpropagation that require gradients to optimize performance.[32] To do so, the framework must keep track of the order of operations done to the input Tensors in a model, and then compute the gradients with respect to the appropriate parameters.[32]
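A minimal sketch of this mechanism, assuming TensorFlow 2's `tf.GradientTape` API: the tape records operations applied to a variable, then differentiates through them.

```python
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x  # the tape records the squaring operation
# differentiate y = x^2 with respect to x: dy/dx = 2x = 6.0 at x = 3
dy_dx = tape.gradient(y, x)
```

The same tape mechanism underlies gradient computation inside training loops.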

Eager execution

TensorFlow includes an “eager execution” mode, which means that operations are evaluated immediately as opposed to being added to a computational graph which is executed later.[33] Code executed eagerly can be examined step by step through a debugger, since intermediate values are computed at each line of code rather than later in a computational graph.[33] This execution paradigm is considered to be easier to debug because of its step-by-step transparency.[33]
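For example (a sketch assuming TensorFlow 2, where eager execution is the default), an operation returns a concrete, inspectable result immediately:

```python
import tensorflow as tf

a = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
b = tf.matmul(a, a)  # evaluated immediately, no separate graph/session step
print(b.numpy())     # concrete values are available right away: [[7, 10], [15, 22]]
```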

Distribute

In both eager and graph executions, TensorFlow provides an API for distributing computation across multiple devices with various distribution strategies.[34] This distributed computing can often speed up the execution of training and evaluating of TensorFlow models and is a common practice in the field of AI.[34][35]
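As a sketch of this API (assuming TensorFlow 2's `tf.distribute` module), a strategy object defines a scope within which model variables are created across the available devices; `MirroredStrategy` falls back to a single CPU replica when no GPUs are present:

```python
import tensorflow as tf

# Replicates variables on each available device (a single CPU replica here).
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
print(strategy.num_replicas_in_sync)  # number of devices participating in training
```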

Losses

To train and assess models, TensorFlow provides a set of loss functions (also known as cost functions).[36] Some popular examples include mean squared error (MSE) and binary cross entropy (BCE).[36]
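For instance, mean squared error can be computed directly with the Keras losses API (a minimal sketch assuming TensorFlow 2):

```python
import tensorflow as tf

mse = tf.keras.losses.MeanSquaredError()
# squared errors: (1.0 - 0.5)^2 and (0.0 - 0.5)^2, whose mean is 0.25
loss = mse([1.0, 0.0], [0.5, 0.5])
```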

Metrics

In order to assess the performance of machine learning models, TensorFlow gives API access to commonly used metrics. Examples include various accuracy metrics (binary, categorical, sparse categorical) along with other metrics such as Precision, Recall, and Intersection-over-Union (IoU).[37]
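A sketch of the metrics API (assuming TensorFlow 2's `tf.keras.metrics`): a metric object accumulates state over batches and reports an aggregate result.

```python
import tensorflow as tf

acc = tf.keras.metrics.BinaryAccuracy()
# with the default 0.5 threshold, all four predictions match the labels
acc.update_state([1, 1, 0, 0], [0.9, 0.6, 0.3, 0.1])
result = acc.result()  # fraction of correct predictions: 1.0
```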

TF.nn

TensorFlow.nn is a module for executing primitive neural network operations on models.[38] Some of these operations include variations of convolutions (1/2/3D, Atrous, depthwise), activation functions (Softmax, RELU, GELU, Sigmoid, etc.) and their variations, and other operations (max-pooling, bias-add, etc.).[38]
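Two of the primitive activation operations mentioned above, sketched with the `tf.nn` module (assuming TensorFlow 2):

```python
import tensorflow as tf

x = tf.constant([-1.0, 0.0, 2.0])
relu_out = tf.nn.relu(x)  # clamps negatives to zero: [0., 0., 2.]
soft_out = tf.nn.softmax(tf.constant([1.0, 1.0]))  # equal logits -> [0.5, 0.5]
```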

Optimizers

TensorFlow offers a set of optimizers for training neural networks, including ADAM, ADAGRAD, and Stochastic Gradient Descent (SGD).[39] When training a model, different optimizers offer different modes of parameter tuning, often affecting a model's convergence and performance.[40]
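A minimal sketch of one SGD step (assuming TensorFlow 2's Keras optimizer API), combining a gradient tape with `apply_gradients`:

```python
import tensorflow as tf

var = tf.Variable(1.0)
opt = tf.keras.optimizers.SGD(learning_rate=0.1)

with tf.GradientTape() as tape:
    loss = var * var  # loss = var^2, gradient is 2 * var
grads = tape.gradient(loss, [var])
# one SGD update: var -> 1.0 - 0.1 * 2.0 = 0.8
opt.apply_gradients(zip(grads, [var]))
```

Swapping in ADAM or ADAGRAD changes only the optimizer construction line; the update rule applied to the parameters differs per optimizer.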

Usage and extensions

TensorFlow

TensorFlow serves as a core platform and library for machine learning. TensorFlow's APIs use Keras to allow users to make their own machine-learning models.[41] In addition to building and training their model, TensorFlow can also help load the data to train the model, and deploy it using TensorFlow Serving.[42]
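As a sketch of model-building through the Keras API (assuming TensorFlow 2; the layer sizes are illustrative only):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),  # 4*8 weights + 8 biases = 40 params
    tf.keras.layers.Dense(1),                     # 8*1 weights + 1 bias  =  9 params
])
model.compile(optimizer="adam", loss="mse")  # ready for model.fit(...) on training data
```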

TensorFlow provides a stable Python Application Program Interface (API),[43] as well as APIs without backwards compatibility guarantee for Javascript,[44] C++,[45] and Java.[46][10] Third-party language binding packages are also available for C#,[47][48] Haskell,[49] Julia,[50] MATLAB,[51] Object Pascal,[52] R,[53] Scala,[54] Rust,[55] OCaml,[56] and Crystal.[57] Bindings that are now archived and unsupported include Go[58] and Swift.[59]

TensorFlow.js

TensorFlow also has a library for machine learning in JavaScript. Using the provided JavaScript APIs, TensorFlow.js allows users to use either TensorFlow.js models or converted models from TensorFlow or TFLite, retrain the given models, and run on the web.[42][60]

TFLite

TensorFlow Lite has APIs for mobile apps and embedded devices to generate and deploy TensorFlow models.[61] These models are compressed and optimized to be more efficient and perform better on smaller-capacity devices.[62]

TensorFlow Lite uses FlatBuffers as the data serialization format for network models, eschewing the Protocol Buffers format used by standard TensorFlow models.[62]
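A hedged sketch of the conversion step, assuming TensorFlow 2.x: tf.lite.TFLiteConverter turns a Keras model into a serialized FlatBuffers byte buffer (the toy model below exists only for illustration):

```python
import tensorflow as tf

# A tiny Keras model to convert.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

# Convert to the TFLite format: the result is FlatBuffers bytes,
# not a Protocol Buffers graph.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# FlatBuffers files carry a 4-byte file identifier; for TFLite it is "TFL3".
print(tflite_model[4:8])
```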

TFX

TensorFlow Extended (TFX) provides numerous components to perform all the operations needed for end-to-end production.[63] Components include loading, validating, and transforming data; tuning, training, and evaluating the machine learning model; and pushing the model itself into production.[42][63]

Integrations

NumPy

NumPy is one of the most popular Python data libraries, and TensorFlow offers integration and compatibility with its data structures.[64] NumPy ndarrays, the library's native datatype, are automatically converted to TensorFlow Tensors in TF operations, and vice versa.[64] This allows the two libraries to work in unison without requiring the user to write explicit data conversions. Moreover, the integration extends to memory optimization: TF Tensors share the underlying memory representations of NumPy ndarrays whenever possible.[64]
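A short sketch of this interoperability, assuming TensorFlow 2.x and NumPy are installed:

```python
import numpy as np
import tensorflow as tf

arr = np.array([1.0, 2.0, 3.0])    # a NumPy ndarray
tensor = tf.multiply(arr, 2.0)     # TF op auto-converts the ndarray to a Tensor
back = tensor.numpy()              # ...and the Tensor converts back to an ndarray

print(type(back).__name__)
```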

Extensions

TensorFlow also offers a variety of libraries and extensions to advance and extend the models and methods used.[65] For example, TensorFlow Recommenders and TensorFlow Graphics are libraries for recommendation systems and graphics respectively, TensorFlow Federated provides a framework for learning on decentralized data, and TensorFlow Cloud allows users to run their local code directly on Google Cloud.[66] Other add-ons, libraries, and frameworks include TensorFlow Model Optimization, TensorFlow Probability, TensorFlow Quantum, and TensorFlow Decision Forests.[65][66]

Google Colab

Google also released Colaboratory, a TensorFlow Jupyter notebook environment that does not require any setup.[67] It runs on Google Cloud and allows users free access to GPUs and the ability to store and share notebooks on Google Drive.[68]

Google JAX

Google JAX is a machine learning framework for transforming numerical functions.[69][70][71] It is described as bringing together a modified version of autograd (automatically obtaining the gradient function by differentiating a function) and TensorFlow's XLA (Accelerated Linear Algebra). It is designed to follow the structure and workflow of NumPy as closely as possible and works with TensorFlow as well as other frameworks such as PyTorch. The primary functions of JAX are:[69]

  1. grad: automatic differentiation
  2. jit: compilation
  3. vmap: auto-vectorization
  4. pmap: single program, multiple data (SPMD) programming
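The first three transforms above can be sketched as follows (assumes the jax library is installed; pmap is omitted because it requires multiple accelerator devices):

```python
import jax
import jax.numpy as jnp

def f(x):
    return x ** 2                 # f'(x) = 2x

df = jax.grad(f)                  # 1. automatic differentiation
fast_f = jax.jit(f)               # 2. XLA compilation
batched_f = jax.vmap(f)           # 3. auto-vectorization over a leading axis

g = float(df(3.0))                # gradient of x^2 at x = 3
ys = batched_f(jnp.arange(4.0))   # f applied elementwise to [0, 1, 2, 3]
```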

Applications

Medical

GE Healthcare used TensorFlow to increase the speed and accuracy of MRIs in identifying specific body parts.[72] Google used TensorFlow to create DermAssist, a free mobile application that allows users to take pictures of their skin and identify potential health complications.[73] Sinovation Ventures used TensorFlow to identify and classify eye diseases from optical coherence tomography (OCT) scans.[73]

Social media

Twitter implemented TensorFlow to rank tweets by importance for a given user, and changed their platform to show tweets in order of this ranking.[74] Previously, tweets were simply shown in reverse chronological order.[74] The photo sharing app VSCO used TensorFlow to help suggest custom filters for photos.[73]

Search engine

Google officially released RankBrain on October 26, 2015, backed by TensorFlow.[75]

Education

InSpace, a virtual learning platform, used TensorFlow to filter out toxic chat messages in classrooms.[76] Liulishuo, an online English learning platform, utilized TensorFlow to create an adaptive curriculum for each student.[77] TensorFlow was used to accurately assess a student's current abilities, and also helped decide the best future content to show based on those capabilities.[77]

Retail

The e-commerce platform Carousell used TensorFlow to provide personalized recommendations for customers.[73] The cosmetics company ModiFace used TensorFlow to create an augmented reality experience for customers to test various shades of make-up on their face.[78]

2016 comparison of original photo (left) and with TensorFlow neural style applied (right)

Research

TensorFlow is the foundation for the automated image-generation software DeepDream.[79]

References

General

  • Moroney, Laurence (October 1, 2020). AI and Machine Learning for Coders (1st ed.). O'Reilly Media. p. 365. ISBN 9781492078197. Archived from the original on June 7, 2021. Retrieved December 21, 2020.
  • Géron, Aurélien (October 15, 2019). Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow (2nd ed.). O'Reilly Media. p. 856. ISBN 9781492032632. Archived from the original on May 1, 2021. Retrieved November 25, 2019.
  • Ramsundar, Bharath; Zadeh, Reza Bosagh (March 23, 2018). TensorFlow for Deep Learning (1st ed.). O'Reilly Media. p. 256. ISBN 9781491980446. Archived from the original on June 7, 2021. Retrieved November 25, 2019.
  • Hope, Tom; Resheff, Yehezkel S.; Lieder, Itay (August 27, 2017). Learning TensorFlow: A Guide to Building Deep Learning Systems (1st ed.). O'Reilly Media. p. 242. ISBN 9781491978504. Archived from the original on March 8, 2021. Retrieved November 25, 2019.
  • Shukla, Nishant (February 12, 2018). Machine Learning with TensorFlow (1st ed.). Manning Publications. p. 272. ISBN 9781617293870.

Citations

  1. ^ a b "Credits". TensorFlow.org. Archived from the original on November 17, 2015. Retrieved November 10, 2015.
  2. ^ "TensorFlow.js". Archived from the original on May 6, 2018. Retrieved June 28, 2018.
  3. ^ Abadi, Martín; Barham, Paul; Chen, Jianmin; Chen, Zhifeng; Davis, Andy; Dean, Jeffrey; Devin, Matthieu; Ghemawat, Sanjay; Irving, Geoffrey; Isard, Michael; Kudlur, Manjunath; Levenberg, Josh; Monga, Rajat; Moore, Sherry; Murray, Derek G.; Steiner, Benoit; Tucker, Paul; Vasudevan, Vijay; Warden, Pete; Wicke, Martin; Yu, Yuan; Zheng, Xiaoqiang (2016). TensorFlow: A System for Large-Scale Machine Learning (PDF). Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI ’16). arXiv:1605.08695. Archived (PDF) from the original on December 12, 2020. Retrieved October 26, 2020.
  4. ^ TensorFlow: Open source machine learning. Google. 2015. Archived from the original on November 11, 2021. "It is machine learning software being used for various kinds of perceptual and language understanding tasks" – Jeffrey Dean, minute 0:47 / 2:17 from YouTube clip
  5. ^ Video clip by Google about TensorFlow 2015 at minute 0:15/2:17
  6. ^ Video clip by Google about TensorFlow 2015 at minute 0:26/2:17
  7. ^ Dean et al. 2015, p. 2
  8. ^ Metz, Cade (November 9, 2015). "Google Just Open Sourced TensorFlow, Its Artificial Intelligence Engine". Wired. Archived from the original on November 9, 2015. Retrieved November 10, 2015.
  9. ^ a b TensorFlow (September 30, 2019). "TensorFlow 2.0 is now available!". Medium. Archived from the original on October 7, 2019. Retrieved November 24, 2019.
  10. ^ a b "API Documentation". Archived from the original on November 16, 2015. Retrieved June 27, 2018.
  11. ^ Dean, Jeff; Monga, Rajat; et al. (November 9, 2015). "TensorFlow: Large-scale machine learning on heterogeneous systems" (PDF). TensorFlow.org. Google Research. Archived (PDF) from the original on November 20, 2015. Retrieved November 10, 2015.
  12. ^ Perez, Sarah (November 9, 2015). "Google Open-Sources The Machine Learning Tech Behind Google Photos Search, Smart Reply And More". TechCrunch. Archived from the original on November 9, 2015. Retrieved November 11, 2015.
  13. ^ Oremus, Will (November 9, 2015). "What Is TensorFlow, and Why Is Google So Excited About It?". Slate. Archived from the original on November 10, 2015. Retrieved November 11, 2015.
  14. ^ Ward-Bailey, Jeff (November 25, 2015). "Google chairman: We're making 'real progress' on artificial intelligence". CSMonitor. Archived from the original on September 16, 2015. Retrieved November 25, 2015.
  15. ^ TensorFlow Developers (2022). "Tensorflow Release 1.0.0". GitHub. doi:10.5281/zenodo.4724125. Archived from the original on February 27, 2021. Retrieved July 24, 2017.
  16. ^ Metz, Cade (November 10, 2015). "TensorFlow, Google's Open Source AI, Points to a Fast-Changing Hardware World". Wired. Archived from the original on November 11, 2015. Retrieved November 11, 2015.
  17. ^ "Introduction to tensors". tensorflow.org. Archived from the original on May 26, 2024. Retrieved March 3, 2024.
  18. ^ Machine Learning: Google I/O 2016, minute 07:30/44:44. Archived December 21, 2016, at the Wayback Machine. Retrieved June 5, 2016.
  19. ^ TensorFlow (March 30, 2018). "Introducing TensorFlow.js: Machine Learning in Javascript". Medium. Archived from the original on March 30, 2018. Retrieved May 24, 2019.
  20. ^ TensorFlow (January 14, 2019). "What's coming in TensorFlow 2.0". Medium. Archived from the original on January 14, 2019. Retrieved May 24, 2019.
  21. ^ TensorFlow (May 9, 2019). "Introducing TensorFlow Graphics: Computer Graphics Meets Deep Learning". Medium. Archived from the original on May 9, 2019. Retrieved May 24, 2019.
  22. ^ Jouppi, Norm. "Google supercharges machine learning tasks with TPU custom chip". Google Cloud Platform Blog. Archived from the original on May 18, 2016. Retrieved May 19, 2016.
  23. ^ "Build and train machine learning models on our new Google Cloud TPUs". Google. May 17, 2017. Archived from the original on May 17, 2017. Retrieved May 18, 2017.
  24. ^ "Cloud TPU". Google Cloud. Archived from the original on May 17, 2017. Retrieved May 24, 2019.
  25. ^ "Cloud TPU machine learning accelerators now available in beta". Google Cloud Platform Blog. Archived from the original on February 12, 2018. Retrieved February 12, 2018.
  26. ^ Kundu, Kishalaya (July 26, 2018). "Google Announces Edge TPU, Cloud IoT Edge at Cloud Next 2018". Beebom. Archived from the original on May 26, 2024. Retrieved February 2, 2019.
  27. ^ "Google's new machine learning framework is going to put more AI on your phone". May 17, 2017. Archived from the original on May 17, 2017. Retrieved May 19, 2017.
  28. ^ TensorFlow (January 16, 2019). "TensorFlow Lite Now Faster with Mobile GPUs (Developer Preview)". Medium. Archived from the original on January 16, 2019. Retrieved May 24, 2019.
  29. ^ "uTensor and Tensor Flow Announcement | Mbed". os.mbed.com. Archived from the original on May 9, 2019. Retrieved May 24, 2019.
  30. ^ a b He, Horace (October 10, 2019). "The State of Machine Learning Frameworks in 2019". The Gradient. Archived from the original on October 10, 2019. Retrieved May 22, 2020.
  31. ^ "TensorFlow 2.0 is now available!". TensorFlow Blog. September 30, 2019. Archived from the original on October 30, 2019. Retrieved May 22, 2020.
  32. ^ a b "Introduction to gradients and automatic differentiation". TensorFlow. Archived from the original on October 28, 2021. Retrieved November 4, 2021.
  33. ^ a b c "Eager execution | TensorFlow Core". TensorFlow. Archived from the original on November 4, 2021. Retrieved November 4, 2021.
  34. ^ a b "Module: tf.distribute | TensorFlow Core v2.6.1". TensorFlow. Archived from the original on May 26, 2024. Retrieved November 4, 2021.
  35. ^ Omatu, Sigeru (2014). Distributed Computing and Artificial Intelligence, 11th International Conference. Springer International Publishing. ISBN 978-3-319-07593-8. OCLC 980886715. Archived from the original on May 26, 2024. Retrieved November 4, 2021.
  36. ^ a b "Module: tf.losses | TensorFlow Core v2.6.1". TensorFlow. Archived from the original on October 27, 2021. Retrieved November 4, 2021.
  37. ^ "Module: tf.metrics | TensorFlow Core v2.6.1". TensorFlow. Archived from the original on November 4, 2021. Retrieved November 4, 2021.
  38. ^ a b "Module: tf.nn | TensorFlow Core v2.7.0". TensorFlow. Archived from the original on May 26, 2024. Retrieved November 6, 2021.
  39. ^ "Module: tf.optimizers | TensorFlow Core v2.7.0". TensorFlow. Archived from the original on October 30, 2021. Retrieved November 6, 2021.
  40. ^ Dogo, E. M.; Afolabi, O. J.; Nwulu, N. I.; Twala, B.; Aigbavboa, C. O. (December 2018). "A Comparative Analysis of Gradient Descent-Based Optimization Algorithms on Convolutional Neural Networks". 2018 International Conference on Computational Techniques, Electronics and Mechanical Systems (CTEMS). pp. 92–99. doi:10.1109/CTEMS.2018.8769211. ISBN 978-1-5386-7709-4. S2CID 198931032. Archived from the original on May 26, 2024. Retrieved July 25, 2023.
  41. ^ "TensorFlow Core | Machine Learning for Beginners and Experts". TensorFlow. Archived from the original on January 20, 2023. Retrieved November 4, 2021.
  42. ^ a b c "Introduction to TensorFlow". TensorFlow. Archived from the original on January 20, 2023. Retrieved October 28, 2021.
  43. ^ "All symbols in TensorFlow 2 | TensorFlow Core v2.7.0". TensorFlow. Archived from the original on November 6, 2021. Retrieved November 6, 2021.
  44. ^ "TensorFlow.js". js.tensorflow.org. Archived from the original on May 26, 2024. Retrieved November 6, 2021.
  45. ^ "TensorFlow C++ API Reference | TensorFlow Core v2.7.0". TensorFlow. Archived from the original on January 20, 2023. Retrieved November 6, 2021.
  46. ^ "org.tensorflow | Java". TensorFlow. Archived from the original on November 6, 2021. Retrieved November 6, 2021.
  47. ^ Icaza, Miguel de (February 17, 2018). "TensorFlowSharp: TensorFlow API for .NET languages". GitHub. Archived from the original on July 24, 2017. Retrieved February 18, 2018.
  48. ^ Chen, Haiping (December 11, 2018). "TensorFlow.NET: .NET Standard bindings for TensorFlow". GitHub. Archived from the original on July 12, 2019. Retrieved December 11, 2018.
  49. ^ "haskell: Haskell bindings for TensorFlow". tensorflow. February 17, 2018. Archived from the original on July 24, 2017. Retrieved February 18, 2018.
  50. ^ Malmaud, Jon (August 12, 2019). "A Julia wrapper for TensorFlow". GitHub. Archived from the original on July 24, 2017. Retrieved August 14, 2019. operations like sin, * (matrix multiplication), .* (element-wise multiplication), etc [..]. Compare to Python, which requires learning specialized namespaced functions like tf.matmul.
  51. ^ "A MATLAB wrapper for TensorFlow Core". GitHub. November 3, 2019. Archived from the original on September 14, 2020. Retrieved February 13, 2020.
  52. ^ "Use TensorFlow from Pascal (FreePascal, Lazarus, etc.)". GitHub. January 19, 2023. Archived from the original on January 20, 2023. Retrieved January 20, 2023.
  53. ^ "tensorflow: TensorFlow for R". RStudio. February 17, 2018. Archived from the original on January 4, 2017. Retrieved February 18, 2018.
  54. ^ Platanios, Anthony (February 17, 2018). "tensorflow_scala: TensorFlow API for the Scala Programming Language". GitHub. Archived from the original on February 18, 2019. Retrieved February 18, 2018.
  55. ^ "rust: Rust language bindings for TensorFlow". tensorflow. February 17, 2018. Archived from the original on July 24, 2017. Retrieved February 18, 2018.
  56. ^ Mazare, Laurent (February 16, 2018). "tensorflow-ocaml: OCaml bindings for TensorFlow". GitHub. Archived from the original on June 11, 2018. Retrieved February 18, 2018.
  57. ^ "fazibear/tensorflow.cr". GitHub. Archived from the original on June 27, 2018. Retrieved October 10, 2018.
  58. ^ "tensorflow package - github.com/tensorflow/tensorflow/tensorflow/go - pkg.go.dev". pkg.go.dev. Archived from the original on November 6, 2021. Retrieved November 6, 2021.
  59. ^ "Swift for TensorFlow (In Archive Mode)". TensorFlow. Archived from the original on November 6, 2021. Retrieved November 6, 2021.
  60. ^ "TensorFlow.js | Machine Learning for JavaScript Developers". TensorFlow. Archived from the original on November 4, 2021. Retrieved October 28, 2021.
  61. ^ "TensorFlow Lite | ML for Mobile and Edge Devices". TensorFlow. Archived from the original on November 4, 2021. Retrieved November 1, 2021.
  62. ^ a b "TensorFlow Lite". TensorFlow. Archived from the original on November 2, 2021. Retrieved November 1, 2021.
  63. ^ a b "TensorFlow Extended (TFX) | ML Production Pipelines". TensorFlow. Archived from the original on November 4, 2021. Retrieved November 2, 2021.
  64. ^ a b c "Customization basics: tensors and operations | TensorFlow Core". TensorFlow. Archived from the original on November 6, 2021. Retrieved November 6, 2021.
  65. ^ a b "Guide | TensorFlow Core". TensorFlow. Archived from the original on July 17, 2019. Retrieved November 4, 2021.
  66. ^ a b "Libraries & extensions". TensorFlow. Archived from the original on November 4, 2021. Retrieved November 4, 2021.
  67. ^ "Colaboratory – Google". research.google.com. Archived from the original on October 24, 2017. Retrieved November 10, 2018.
  68. ^ "Google Colaboratory". colab.research.google.com. Archived from the original on February 3, 2021. Retrieved November 6, 2021.
  69. ^ a b Bradbury, James; Frostig, Roy; Hawkins, Peter; Johnson, Matthew James; Leary, Chris; MacLaurin, Dougal; Necula, George; Paszke, Adam; Vanderplas, Jake; Wanderman-Milne, Skye; Zhang, Qiao (June 18, 2022), "JAX: Autograd and XLA", Astrophysics Source Code Library, Google, Bibcode:2021ascl.soft11002B, archived from the original on June 18, 2022, retrieved June 18, 2022
  70. ^ "Using JAX to accelerate our research". www.deepmind.com. Archived from the original on June 18, 2022. Retrieved June 18, 2022.
  71. ^ "Why is Google's JAX so popular?". Analytics India Magazine. April 25, 2022. Archived from the original on June 18, 2022. Retrieved June 18, 2022.
  72. ^ "Intelligent Scanning Using Deep Learning for MRI". Archived from the original on November 4, 2021. Retrieved November 4, 2021.
  73. ^ a b c d "Case Studies and Mentions". TensorFlow. Archived from the original on October 26, 2021. Retrieved November 4, 2021.
  74. ^ a b "Ranking Tweets with TensorFlow". Archived from the original on November 4, 2021. Retrieved November 4, 2021.
  75. ^ "A Complete Guide to the Google RankBrain Algorithm". Search Engine Journal. September 2, 2020. Archived from the original on November 6, 2021. Retrieved November 6, 2021.
  76. ^ "InSpace: A new video conferencing platform that uses TensorFlow.js for toxicity filters in chat". Archived from the original on November 4, 2021. Retrieved November 4, 2021.
  77. ^ a b Xulin. "流利说基于 TensorFlow 的自适应系统实践". Weixin Official Accounts Platform. Archived from the original on November 6, 2021. Retrieved November 4, 2021.
  78. ^ "How Modiface utilized TensorFlow.js in production for AR makeup try on in the browser". Archived from the original on November 4, 2021. Retrieved November 4, 2021.
  79. ^ Byrne, Michael (November 11, 2015). "Google Offers Up Its Entire Machine Learning Library as Open-Source Software". Vice. Archived from the original on January 25, 2021. Retrieved November 11, 2015.

External links