Examine individual changes

This page allows you to examine the variables generated by the Edit Filter for an individual change.

Variables generated for this change

VariableValue
Edit count of the user ($1) (user_editcount)
null
Name of the user account ($1) (user_name)
'2001:EE0:183:81A0:40E0:66F2:A9E6:E1E'
Age of the user account ($1) (user_age)
0
Groups (including implicit) the user is in ($1) (user_groups)
[ 0 => '*' ]
Rights that the user has ($1) (user_rights)
[ 0 => 'createaccount', 1 => 'read', 2 => 'edit', 3 => 'createtalk', 4 => 'writeapi', 5 => 'viewmyprivateinfo', 6 => 'editmyprivateinfo', 7 => 'editmyoptions', 8 => 'abusefilter-log-detail', 9 => 'urlshortener-create-url', 10 => 'centralauth-merge', 11 => 'abusefilter-view', 12 => 'abusefilter-log', 13 => 'vipsscaler-test' ]
Whether or not a user is editing through the mobile interface ($1) (user_mobile)
false
Whether the user is editing from mobile app ($1) (user_app)
false
Page ID ($1) (page_id)
16779711
Page namespace ($1) (page_namespace)
0
Page title without namespace ($1) (page_title)
'Advanced Vector Extensions'
Full page title ($1) (page_prefixedtitle)
'Advanced Vector Extensions'
Edit protection level of the page ($1) (page_restrictions_edit)
[]
Last ten users to contribute to the page ($1) (page_recent_contributors)
[ 0 => 'Skynxnex', 1 => '2001:EE0:183:81A0:40E0:66F2:A9E6:E1E', 2 => '2A02:2168:84F0:1800:5143:307D:4910:F87A', 3 => '2A02:2168:84F0:1800:9E85:CC54:9319:7754', 4 => '2A02:2168:84F0:1800:C209:663B:AFE3:C5DE', 5 => 'BilCat', 6 => '120.88.147.117', 7 => '203.206.77.207', 8 => '49.180.185.156', 9 => '2A0B:7080:20:0:0:0:1:9837' ]
Page age in seconds ($1) (page_age)
508077678
Action ($1) (action)
'edit'
Edit summary/reason ($1) (summary)
'/* {{Anchor|AVX2}}Advanced Vector Extensions 2 */ '
Time since last page edit in seconds ($1) (page_last_edit_age)
173
Old content model ($1) (old_content_model)
'wikitext'
New content model ($1) (new_content_model)
'wikitext'
Old page wikitext, before the edit ($1) (old_wikitext)
'{{Short description|Extensions to the x86 instruction set architecture for microprocessors from Intel and AMD}} {{Use mdy dates|date=September 2018}} '''Advanced Vector Extensions''' ('''AVX''', also known as '''Gesher New Instructions''' and then '''Sandy Bridge New Instructions''') are [[Single instruction, multiple data|SIMD]] extensions to the [[x86]] [[instruction set architecture]] for [[microprocessor]]s from [[Intel]] and [[Advanced Micro Devices]] (AMD). They were proposed by Intel in March 2008 and first supported by Intel with the [[Sandy Bridge]]<ref name="If1e5">{{Cite web|url=http://www.realworldtech.com/sandy-bridge/6/|title=Intel's Sandy Bridge Microarchitecture|last=Kanter|first=David|date=September 25, 2010|website=www.realworldtech.com|language=en-US|access-date=February 17, 2018}}</ref> processor shipping in Q1 2011 and later by AMD with the [[Bulldozer (microarchitecture)|Bulldozer]]<ref name="TSJKX">{{Cite news|url=https://www.extremetech.com/computing/100583-analyzing-bulldozers-scaling-single-thread-performance/4|title=Analyzing Bulldozer: Why AMD's chip is so disappointing - Page 4 of 5 - ExtremeTech|last=Hruska|first=Joel|date=October 24, 2011|work=ExtremeTech|access-date=February 17, 2018|language=en-US}}</ref> processor shipping in Q3 2011. AVX provides new features, new instructions, and a new coding scheme. '''AVX2''' (also known as '''Haswell New Instructions''') expands most integer commands to 256 bits and introduces new instructions. They were first supported by Intel with the [[Haswell (microarchitecture)|Haswell]] processor, which shipped in 2013. [[AVX-512]] expands AVX to 512-bit support using a new [[EVEX prefix]] encoding proposed by Intel in July 2013 and first supported by Intel with the [[Knights Landing (microarchitecture)|Knights Landing]] co-processor, which shipped in 2016.<ref name="reinders512">{{citation |url= http://software.intel.com/en-us/blogs/2013/avx-512-instructions |title= AVX-512 Instructions |date= July 23, 2013 |author= James Reinders |publisher= [[Intel]] |access-date= August 20, 2013}}</ref><ref name="8JgOG">{{Cite news|url=http://ark.intel.com/products/94033/Intel-Xeon-Phi-Processor-7210-16GB-1_30-GHz-64-core|title=Intel Xeon Phi Processor 7210 (16GB, 1.30 GHz, 64 core) Product Specifications|access-date=March 16, 2018|newspaper=Intel ARK (Product Specs)}}</ref> In conventional processors, AVX-512 was introduced with [[Skylake (microarchitecture)|Skylake]] server and HEDT processors in 2017. == {{Anchor|AVX1}}Advanced Vector Extensions == AVX uses sixteen YMM registers to perform a single instruction on multiple pieces of data (see [[SIMD]]). Each YMM register can hold and do simultaneous operations (math) on: * eight 32-bit single-precision floating point numbers or * four 64-bit double-precision floating point numbers. The width of the SIMD registers is increased from 128 bits to 256 bits, and renamed from XMM0–XMM7 to YMM0–YMM7 (in [[x86-64]] mode, from XMM0–XMM15 to YMM0–YMM15). The legacy [[Streaming SIMD Extensions|SSE]] instructions can be still utilized via the [[VEX prefix]] to operate on the lower 128 bits of the YMM registers. {| class="wikitable floatright" style="margin-left: 1rem; text-align: center; line-height: normal" |+ style="margin-bottom: 0.2em; font-size: small;" | AVX-512 register scheme as extension from the AVX (YMM0-YMM15) and SSE (XMM0-XMM15) registers |- | style="width: 50%; border: none; font-size: xx-small;" | <span style="float: left">511</span> <span style="float: right">256</span> | style="width: 25%; border: none; font-size: xx-small;" | <span style="float: left">255</span> <span style="float: right">128</span> | style="width: 25%; border: none; font-size: xx-small;" | <span style="float: left">127</span> <span style="float: right">0</span> |- | style="border-top: none" | | style="border-top: none" | | style="border-top: none" | |- | style="padding: 0" | &nbsp;&nbsp;ZMM0&nbsp;&nbsp; | style="padding: 0; background: #ddd" | &nbsp;&nbsp;YMM0&nbsp;&nbsp; | style="padding: 0; background: #ccc" | &nbsp;&nbsp;XMM0&nbsp;&nbsp; |- | style="padding: 0" | ZMM1 | style="padding: 0; background: #ddd" | YMM1 | style="padding: 0; background: #ccc" | XMM1 |- | style="padding: 0" | ZMM2 | style="padding: 0; background: #ddd" | YMM2 | style="padding: 0; background: #ccc" | XMM2 |- | style="padding: 0" | ZMM3 | style="padding: 0; background: #ddd" | YMM3 | style="padding: 0; background: #ccc" | XMM3 |- | style="padding: 0" | ZMM4 | style="padding: 0; background: #ddd" | YMM4 | style="padding: 0; background: #ccc" | XMM4 |- | style="padding: 0" | ZMM5 | style="padding: 0; background: #ddd" | YMM5 | style="padding: 0; background: #ccc" | XMM5 |- | style="padding: 0" | ZMM6 | style="padding: 0; background: #ddd" | YMM6 | style="padding: 0; background: #ccc" | XMM6 |- | style="padding: 0" | ZMM7 | style="padding: 0; background: #ddd" | YMM7 | style="padding: 0; background: #ccc" | XMM7 |- | style="padding: 0" | ZMM8 | style="padding: 0; background: #ddd" | YMM8 | style="padding: 0; background: #ccc" | XMM8 |- | style="padding: 0" | ZMM9 | style="padding: 0; background: #ddd" | YMM9 | style="padding: 0; background: #ccc" | XMM9 |- | style="padding: 0" | ZMM10 | style="padding: 0; background: #ddd" | YMM10 | style="padding: 0; background: #ccc" | XMM10 |- | style="padding: 0" | ZMM11 | style="padding: 0; background: #ddd" | YMM11 | style="padding: 0; background: #ccc" | XMM11 |- | style="padding: 0" | ZMM12 | style="padding: 0; background: #ddd" | YMM12 | style="padding: 0; background: #ccc" | XMM12 |- | style="padding: 0" | ZMM13 | style="padding: 0; background: #ddd" | YMM13 | style="padding: 0; background: #ccc" | XMM13 |- | style="padding: 0" | ZMM14 | style="padding: 0; background: #ddd" | YMM14 | style="padding: 0; background: #ccc" | XMM14 |- | style="padding: 0" | ZMM15 | style="padding: 0; background: #ddd" | YMM15 | style="padding: 0; background: #ccc" | XMM15 |- | style="padding: 0" | ZMM16 | style="padding: 0" | YMM16 | style="padding: 0" | XMM16 |- | style="padding: 0" | ZMM17 | style="padding: 0" | YMM17 | style="padding: 0" | XMM17 |- | style="padding: 0" | ZMM18 | style="padding: 0" | YMM18 | style="padding: 0" | XMM18 |- | style="padding: 0" | ZMM19 | style="padding: 0" | YMM19 | style="padding: 0" | XMM19 |- | style="padding: 0" | ZMM20 | style="padding: 0" | YMM20 | style="padding: 0" | XMM20 |- | style="padding: 0" | ZMM21 | style="padding: 0" | YMM21 | style="padding: 0" | XMM21 |- | style="padding: 0" | ZMM22 | style="padding: 0" | YMM22 | style="padding: 0" | XMM22 |- | style="padding: 0" | ZMM23 | style="padding: 0" | YMM23 | style="padding: 0" | XMM23 |- | style="padding: 0" | ZMM24 | style="padding: 0" | YMM24 | style="padding: 0" | XMM24 |- | style="padding: 0" | ZMM25 | style="padding: 0" | YMM25 | style="padding: 0" | XMM25 |- | style="padding: 0" | ZMM26 | style="padding: 0" | YMM26 | style="padding: 0" | XMM26 |- | style="padding: 0" | ZMM27 | style="padding: 0" | YMM27 | style="padding: 0" | XMM27 |- | style="padding: 0" | ZMM28 | style="padding: 0" | YMM28 | style="padding: 0" | XMM28 |- | style="padding: 0" | ZMM29 | style="padding: 0" | YMM29 | style="padding: 0" | XMM29 |- | style="padding: 0" | ZMM30 | style="padding: 0" | YMM30 | style="padding: 0" | XMM30 |- | style="padding: 0" | ZMM31 | style="padding: 0" | YMM31 | style="padding: 0" | XMM31 |} AVX introduces a three-operand SIMD instruction format called [[VEX coding scheme]], where the destination register is distinct from the two source operands. For example, an [[Streaming SIMD Extensions|SSE]] instruction using the conventional two-operand form {{nowrap|''a'' ← ''a'' + ''b''}} can now use a non-destructive three-operand form {{nowrap|''c'' ← ''a'' + ''b''}}, preserving both source operands. Originally, AVX's three-operand format was limited to the instructions with SIMD operands (YMM), and did not include instructions with general purpose registers (e.g. EAX). It was later used for coding new instructions on general purpose registers in later extensions, such as [[bit manipulation instruction set|BMI]]. VEX coding is also used for instructions operating on the k0-k7 mask registers that were introduced with [[AVX-512]]. The [[data structure alignment|alignment]] requirement of SIMD memory operands is relaxed.<ref name="intel_vol1">{{cite book|title=Intel 64 and IA-32 Architectures Software Developer's Manual Volume 1: Basic Architecture|publisher=Intel Corporation|page=349|edition=-051US|url=http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-software-developer-vol-1-manual.pdf|access-date=August 23, 2014|chapter=14.9|quote=Memory arguments for most instructions with VEX prefix operate normally without causing #GP(0) on any byte-granularity alignment (unlike Legacy SSE instructions).}}</ref> Unlike their non-VEX coded counterparts, most VEX coded vector instructions no longer require their memory operands to be aligned to the vector size. Notably, the <code>VMOVDQA</code> instruction still requires its memory operand to be aligned. The new [[VEX coding scheme]] introduces a new set of code prefixes that extends the [[opcode]] space, allows instructions to have more than two operands, and allows SIMD vector registers to be longer than 128 bits. The VEX prefix can also be used on the legacy SSE instructions giving them a three-operand form, and making them interact more efficiently with AVX instructions without the need for <code>VZEROUPPER</code> and <code>VZEROALL</code>. The AVX instructions support both 128-bit and 256-bit SIMD. The 128-bit versions can be useful to improve old code without needing to widen the vectorization, and avoid the penalty of going from SSE to AVX, they are also faster on some early AMD implementations of AVX. This mode is sometimes known as AVX-128.<ref name="XEwSu">{{cite web|url=https://gcc.gnu.org/onlinedocs/gcc-4.8.2/gcc/i386-and-x86-64-Options.html#i386-and-x86-64-Options|title=i386 and x86-64 Options - Using the GNU Compiler Collection (GCC)|access-date=February 9, 2014}}</ref> === New instructions === These AVX instructions are in addition to the ones that are 256-bit extensions of the legacy 128-bit SSE instructions; most are usable on both 128-bit and 256-bit operands. {| class="wikitable" |- ! Instruction ! Description |- | <code>VBROADCASTSS</code>, <code>VBROADCASTSD</code>, <code>VBROADCASTF128</code> | Copy a 32-bit, 64-bit or 128-bit memory operand to all elements of a XMM or YMM vector register. |- | <code>VINSERTF128</code> | Replaces either the lower half or the upper half of a 256-bit YMM register with the value of a 128-bit source operand. The other half of the destination is unchanged. |- | <code>VEXTRACTF128</code> | Extracts either the lower half or the upper half of a 256-bit YMM register and copies the value to a 128-bit destination operand. |- | <code>VMASKMOVPS</code>, <code>VMASKMOVPD</code> | Conditionally reads any number of elements from a SIMD vector memory operand into a destination register, leaving the remaining vector elements unread and setting the corresponding elements in the destination register to zero. Alternatively, conditionally writes any number of elements from a SIMD vector register operand to a vector memory operand, leaving the remaining elements of the memory operand unchanged. On the AMD Jaguar processor architecture, this instruction with a memory source operand takes more than 300 clock cycles when the mask is zero, in which case the instruction should do nothing. This appears to be a design flaw.<ref name="wNTlH">{{cite web |url=http://www.agner.org/optimize/microarchitecture.pdf |title=The microarchitecture of Intel, AMD and VIA CPUs: An optimization guide for assembly programmers and compiler makers |access-date=October 17, 2016}}</ref> |- | <code>VPERMILPS</code>, <code>VPERMILPD</code> | Permute In-Lane. Shuffle the 32-bit or 64-bit vector elements of one input operand. These are in-lane 256-bit instructions, meaning that they operate on all 256 bits with two separate 128-bit shuffles, so they can not shuffle across the 128-bit lanes.<ref name="sK2vQ">{{cite web |url=https://chessprogramming.wikispaces.com/AVX2 |title=Chess programming AVX2 |access-date=October 17, 2016 |archive-date=July 10, 2017 |archive-url=https://web.archive.org/web/20170710034021/http://chessprogramming.wikispaces.com/AVX2 |url-status=dead}}</ref> |- | <code>VPERM2F128</code> | Shuffle the four 128-bit vector elements of two 256-bit source operands into a 256-bit destination operand, with an immediate constant as selector. |- | <code>VTESTPS</code>, <code>VTESTPD</code> | Packed bit test of the packed single-precision or double-precision floating-point sign bits, setting or clearing the ZF flag based on AND and CF flag based on ANDN. |- | <code>VZEROALL</code> | Set all YMM registers to zero and tag them as unused. Used when switching between 128-bit use and 256-bit use. |- | <code>VZEROUPPER</code> | Set the upper half of all YMM registers to zero. Used when switching between 128-bit use and 256-bit use. |} === CPUs with AVX === * [[Intel Corporation|Intel]] ** [[Sandy Bridge]] processors (Q1 2011) and newer, except models branded as Celeron and Pentium.<ref name="fh09g">{{Cite web |url= http://www.extremetech.com/computing/80772-intel-offers-peek-at-nehalem-and-larrabee |title= Intel Offers Peek at Nehalem and Larrabee |date= March 17, 2008 |publisher= ExtremeTech}}</ref> ** Pentium and Celeron branded processors starting with [[Tiger Lake (microprocessor)|Tiger Lake]] (Q3 2020) and newer.<ref name="9r8b9">{{Cite web|title=Intel® Celeron® 6305 Processor (4M Cache, 1.80 GHz, with IPU) Product Specifications|url=https://ark.intel.com/content/www/us/en/ark/products/208646/intel-celeron-6305-processor-4m-cache-1-80-ghz-with-ipu.html|access-date=2020-11-10|website=ark.intel.com|language=en}}</ref> * [[Advanced Micro Devices|AMD]]: ** [[Jaguar (microarchitecture)|Jaguar-based]] processors (Q2 2013) and newer Issues regarding compatibility between future Intel and AMD processors are discussed under [[XOP instruction set#Compatibility issues|XOP instruction set]]. * [[VIA Technologies|VIA]]: ** Nano QuadCore ** Eden X4 * [[Zhaoxin]]: ** WuDaoKou-based processors (KX-5000 and KH-20000) === Compiler and assembler support === * [[Absoft Fortran Compilers|Absoft]] supports with -{{Not a typo|mavx}} flag. * The [[Free Pascal]] compiler supports AVX and AVX2 with the -CfAVX and -CfAVX2 switches from version 2.7.1. * [[Delphi (software)|RAD studio]] (v11.0 Alexandria) supports AVX2 and AVX512.<ref>{{Cite web|title=What's New - RAD Studio|url=https://docwiki.embarcadero.com/RADStudio/Alexandria/en/What%27s_New#Inline_assembler_support_for_AVX_instructions_.28AVX-512.29|access-date=2021-09-17|website=docwiki.embarcadero.com}}</ref> * The [[GNU Assembler]] (GAS) inline assembly functions support these instructions (accessible via GCC), as do Intel primitives and the Intel inline assembler (closely compatible to GAS, although more general in its handling of local references within inline code). GAS supports AVX starting with binutils version 2.19.<ref name="gas-version-history">{{Cite web|title=GAS Changes|url=https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=blob_plain;f=gas/NEWS;hb=refs/heads/binutils-2_42-branch|website=sourceware.org|access-date=2024-05-03}}</ref> * [[GNU Compiler Collection|GCC]] starting with version 4.6 (although there was a 4.3 branch with certain support) and the Intel Compiler Suite starting with version 11.1 support AVX. * The [[Open64]] compiler version 4.5.1 supports AVX with -{{Not a typo|mavx}} flag. * [[PathScale]] supports via the -{{Not a typo|mavx}} flag. * The [[Vector Pascal]] compiler supports AVX via the -{{Not a typo|cpuAVX32}} flag. * The [[Visual Studio 2010]]/[[Visual Studio 2012|2012]] compiler supports AVX via intrinsic and {{Not a typo|/arch:AVX}} switch. * [[Netwide Assembler|NASM]] starting with version 2.03 and newer. There were numerous bug fixes and updates related to AVX in version 2.04.<ref name="nasm-version-history">{{Cite web|title=NASM - The Netwide Assembler, Appendix C: NASM Version History|url=https://nasm.us/doc/nasmdocc.html|website=nasm.us|access-date=2024-05-03}}</ref> * Other assemblers such as [[MASM]] VS2010 version, [[YASM]],<ref name="uHguP">{{cite web|title=YASM 0.7.0 Release Notes|url=http://yasm.tortall.net/releases/Release0.7.0.html|website=yasm.tortall.net}}</ref> [[FASM]] and [[JWASM]]. === Operating system support === AVX adds new register-state through the 256-bit wide YMM register file, so explicit [[operating system]] support is required to properly save and restore AVX's expanded registers between [[context switch]]es. The following operating system versions support AVX: * [[DragonFly BSD]]: support added in early 2013. * [[FreeBSD]]: support added in a patch submitted on January 21, 2012,<ref name="lSP7Y">{{citation |url= http://svnweb.freebsd.org/base?view=revision&revision=230426 |title= Add support for the extended FPU states on amd64, both for native 64bit and 32bit ABIs |publisher= svnweb.freebsd.org |date= January 21, 2012 |access-date= January 22, 2012}}</ref> which was included in the 9.1 stable release.<ref name="HIQRm">{{cite web |url= http://www.freebsd.org/releases/9.1R/announce.html |title= FreeBSD 9.1-RELEASE Announcement |access-date= May 20, 2013}}</ref> * [[Linux]]: supported since kernel version 2.6.30,<ref name="etOsK">{{citation |url= https://git.kernel.org/linus/a30469e7921a6dd2067e9e836d7787cfa0105627 |title= x86: add linux kernel support for YMM state |access-date= July 13, 2009}}</ref> released on June 9, 2009.<ref name="XB18C">{{citation |url= http://kernelnewbies.org/Linux_2_6_30 |title= Linux 2.6.30 - Linux Kernel Newbies |access-date= July 13, 2009}}</ref> * [[macOS]]: support added in 10.6.8 ([[Mac OS X Snow Leopard|Snow Leopard]]) update<ref name="3qGKK">{{citation |url= https://twitter.com/comex/status/85401002349576192 |title= Twitter |access-date= June 23, 2010}}</ref>{{Unreliable source?|date=January 2012}} released on June 23, 2011. In fact, macOS Ventura does not support processors without the AVX2 instruction set. <ref>{{cite web | url=https://arstechnica.com/gadgets/2022/08/running-macos-ventura-on-old-macs-isnt-easy-but-some-devs-are-making-progress/ | title=Devs are making progress getting macOS Ventura to run on unsupported, decade-old Macs | date=August 23, 2022 }}</ref> * [[OpenBSD]]: support added on March 21, 2015.<ref name="K5BEr">{{citation |url= http://marc.info/?l=openbsd-cvs&m=142697057503802&w=2 |title= Add support for saving/restoring FPU state using the XSAVE/XRSTOR. |access-date= March 25, 2015}}</ref> * [[Solaris (operating system)|Solaris]]: supported in Solaris 10 Update 10 and Solaris 11. * [[Microsoft Windows|Windows]]: supported in [[Windows 7]] SP1, [[Windows Server 2008 R2]] SP1,<ref name="2kEEK">{{citation |url= http://msdn.microsoft.com/en-us/library/ff545910.aspx |title= Floating-Point Support for 64-Bit Drivers |access-date= December 6, 2009}}</ref> [[Windows 8]], [[Windows 10]]. ** Windows Server 2008 R2 SP1 with Hyper-V requires a hotfix to support AMD AVX (Opteron 6200 and 4200 series) processors, [http://support.microsoft.com/kb/2568088 KB2568088] ** [[Windows XP]] and [[Windows Server 2003]] do not support AVX in both kernel drivers and user applications. == {{Anchor|AVX2}}Advanced Vector Extensions 2 == Advanced Vector Extensions 2 (AVX2), also known as '''Haswell New Instructions''',<ref name="avx2">{{citation |url= http://software.intel.com/en-us/blogs/2011/06/13/haswell-new-instruction-descriptions-now-available/ |title= Haswell New Instruction Descriptions Now Available |publisher= Software.intel.com |access-date= January 17, 2012}}</ref> is an expansion of the AVX instruction set introduced in Intel's [[Haswell (microarchitecture)|Haswell microarchitecture]]. AVX2 makes the following additions: * expansion of most vector integer SSE and AVX instructions to 256 bits * [[Gather-scatter (vector addressing)|Gather]] support, enabling vector elements to be loaded from non-contiguous memory locations * [[Word (computer architecture)#Size families|DWORD- and QWORD-]]granularity any-to-any permutes * vector shifts. Sometimes three-operand [[FMA instruction set|fused multiply-accumulate]] (FMA3) extension is considered part of AVX2, as it was introduced by Intel in the same processor microarchitecture. This is a separate extension using its own [[CPUID]] flag and is described on [[FMA instruction set#FMA3 instruction set|its own page]] and not below. === New instructions === {| class="wikitable" |- ! Instruction ! Description |- | <code>VBROADCASTSS</code>, <code>VBROADCASTSD</code> | Copy a 32-bit or 64-bit register operand to all elements of a XMM or YMM vector register. These are register versions of the same instructions in AVX1. There is no 128-bit version however, but the same effect can be simply achieved using VINSERTF128. |- | <code>VPBROADCASTB</code>, <code>VPBROADCASTW</code>, <code>VPBROADCASTD</code>, <code>VPBROADCASTQ</code> | Copy an 8, 16, 32 or 64-bit integer register or memory operand to all elements of a XMM or YMM vector register. |- | <code>VBROADCASTI128</code> | Copy a 128-bit memory operand to all elements of a YMM vector register. |- | <code>VINSERTI128</code> | Replaces either the lower half or the upper half of a 256-bit YMM register with the value of a 128-bit source operand. The other half of the destination is unchanged. |- | <code>VEXTRACTI128</code> | Extracts either the lower half or the upper half of a 256-bit YMM register and copies the value to a 128-bit destination operand. |- | <code>VGATHERDPD</code>, <code>VGATHERQPD</code>, <code>VGATHERDPS</code>, <code>VGATHERQPS</code> | [[Gather-scatter (vector addressing)|Gathers]] single or double precision floating point values using either 32 or 64-bit indices and scale. |- | <code>VPGATHERDD</code>, <code>VPGATHERDQ</code>, <code>VPGATHERQD</code>, <code>VPGATHERQQ</code> | Gathers 32 or 64-bit integer values using either 32 or 64-bit indices and scale. |- | <code>VPMASKMOVD</code>, <code>VPMASKMOVQ</code> | Conditionally reads any number of elements from a SIMD vector memory operand into a destination register, leaving the remaining vector elements unread and setting the corresponding elements in the destination register to zero. Alternatively, conditionally writes any number of elements from a SIMD vector register operand to a vector memory operand, leaving the remaining elements of the memory operand unchanged. |- | <code>VPERMPS</code>, <code>VPERMD</code> | Shuffle the eight 32-bit vector elements of one 256-bit source operand into a 256-bit destination operand, with a register or memory operand as selector. |- | <code>VPERMPD</code>, <code>VPERMQ</code> | Shuffle the four 64-bit vector elements of one 256-bit source operand into a 256-bit destination operand, with a register or memory operand as selector. |- | <code>VPERM2I128</code> | Shuffle (two of) the four 128-bit vector elements of ''two'' 256-bit source operands into a 256-bit destination operand, with an immediate constant as selector. |- | <code>VPBLENDD</code> | Doubleword immediate version of the PBLEND instructions from [[SSE4]]. |- | <code>VPSLLVD</code>, <code>VPSLLVQ</code> | Shift left logical. Allows variable shifts where each element is shifted according to the packed input. |- | <code>VPSRLVD</code>, <code>VPSRLVQ</code> | Shift right logical. Allows variable shifts where each element is shifted according to the packed input. |- | <code>VPSRAVD</code> | Shift right arithmetically. Allows variable shifts where each element is shifted according to the packed input. |} === CPUs with AVX2 === * [[Intel Corporation|Intel]] ** [[Haswell (microarchitecture)|Haswell]] processors (Q2 2013) and newer, except models branded as Celeron and Pentium. ** Celeron and Pentium branded processors starting with [[Tiger Lake (microprocessor)|Tiger Lake]] (Q3 2020) and newer.<ref name="9r8b9" /> * [[Advanced Micro Devices|AMD]] ** [[Excavator (microarchitecture)|Excavator]] processors (Q2 2015) and newer. * [[VIA Technologies|VIA]]: ** Nano QuadCore ** Eden X4 == AVX-512 == {{Main article|AVX-512}} ''AVX-512'' are 512-bit extensions to the 256-bit Advanced Vector Extensions SIMD instructions for x86 instruction set architecture proposed by [[Intel Corporation|Intel]] in July 2013, and are supported with Intel's [[Knights Landing (microarchitecture)|Knights Landing]] processor.<ref name="reinders512" /> AVX-512 instructions are encoded with the new [[EVEX prefix]]. It allows 4 operands, 8 new 64-bit [[AVX-512#Opmask registers|opmask registers]], scalar memory mode with automatic broadcast, explicit rounding control, and compressed displacement memory [[addressing mode]]. The width of the register file is increased to 512 bits and total register count increased to 32 (registers ZMM0-ZMM31) in x86-64 mode. AVX-512 consists of multiple instruction subsets, not all of which are meant to be supported by all processors implementing them. The instruction set consists of the following: * AVX-512 Foundation (F){{snd}} adds several new instructions and expands most 32-bit and 64-bit floating point SSE-SSE4.1 and AVX/AVX2 instructions with [[EVEX prefix|EVEX coding scheme]] to support the 512-bit registers, operation masks, parameter broadcasting, and embedded rounding and exception control * AVX-512 Conflict Detection Instructions (CD){{snd}} efficient conflict detection to allow more loops to be vectorized, supported by Knights Landing<ref name="reinders512" /> * AVX-512 Exponential and Reciprocal Instructions (ER){{snd}} exponential and reciprocal operations designed to help implement transcendental operations, supported by Knights Landing<ref name="reinders512" /> * AVX-512 Prefetch Instructions (PF){{snd}} new prefetch capabilities, supported by Knights Landing<ref name="reinders512" /> * AVX-512 Vector Length Extensions (VL){{snd}} extends most AVX-512 operations to also operate on XMM (128-bit) and YMM (256-bit) registers (including XMM16-XMM31 and YMM16-YMM31 in x86-64 mode)<ref name="reinders512b" /> * AVX-512 Byte and Word Instructions (BW){{snd}} extends AVX-512 to cover 8-bit and 16-bit integer operations<ref name="reinders512b">{{cite web |url= https://software.intel.com/en-us/blogs/additional-avx-512-instructions |title= Additional AVX-512 instructions |date= July 17, 2014 |author= James Reinders |publisher= [[Intel]] |access-date= August 3, 2014}}</ref> * AVX-512 Doubleword and Quadword Instructions (DQ){{snd}} enhanced 32-bit and 64-bit integer operations<ref name="reinders512b" /> * AVX-512 Integer [[Fused Multiply Add]] (IFMA){{snd}} fused multiply add for 512-bit integers.<ref name="newisa">{{cite web|url=https://software.intel.com/en-us/intel-architecture-instruction-set-extensions-programming-reference|title=Intel Architecture Instruction Set Extensions Programming Reference|publisher=[[Intel]]|format=PDF|access-date=January 29, 2014}}</ref>{{rp|746}} * AVX-512 Vector Byte Manipulation Instructions (VBMI) adds vector byte permutation instructions which are not present in AVX-512BW. * AVX-512 Vector Neural Network Instructions Word variable precision (4VNNIW){{snd}} vector instructions for deep learning. * AVX-512 Fused Multiply Accumulation Packed Single precision (4FMAPS){{snd}} vector instructions for deep learning. * VPOPCNTDQ{{snd}} count of bits set to 1.<ref name="iaiseaffpr">{{cite web|url=https://software.intel.com/en-us/intel-architecture-instruction-set-extensions-programming-reference|title=Intel® Architecture Instruction Set Extensions and Future Features Programming Reference|access-date=October 16, 2017|publisher=Intel}}</ref> * VPCLMULQDQ{{snd}} carry-less multiplication of quadwords.<ref name="iaiseaffpr" /> * AVX-512 Vector Neural Network Instructions (VNNI){{snd}} vector instructions for deep learning.<ref name="iaiseaffpr" /> * AVX-512 Galois Field New Instructions (GFNI){{snd}} vector instructions for calculating [[Galois field]].<ref name="iaiseaffpr" /> * AVX-512 Vector AES instructions (VAES){{snd}} vector instructions for [[Advanced Encryption Standard|AES]] coding.<ref name="iaiseaffpr" /> * AVX-512 Vector Byte Manipulation Instructions 2 (VBMI2){{snd}} byte/word load, store and concatenation with shift.<ref name="iaiseaffpr" /> * AVX-512 Bit Algorithms (BITALG){{snd}} byte/word [[bit manipulation]] instructions expanding VPOPCNTDQ.<ref name="iaiseaffpr" /> * AVX-512 [[Bfloat16 floating-point format|Bfloat16]] Floating-Point Instructions (BF16){{snd}} vector instructions for AI acceleration. * AVX-512 [[Half-precision floating-point format|Half-Precision]] Floating-Point Instructions (FP16){{snd}} vector instructions for operating on floating-point and complex numbers with reduced precision. Only the core extension AVX-512F (AVX-512 Foundation) is required by all implementations, though all current implementations also support CD (conflict detection). All central processors with AVX-512 also support VL, DQ and BW. The ER, PF, 4VNNIW and 4FMAPS instruction set extensions are currently only implemented in Intel computing coprocessors. The updated SSE/AVX instructions in AVX-512F use the same mnemonics as AVX versions; they can operate on 512-bit ZMM registers, and will also support 128/256 bit XMM/YMM registers (with AVX-512VL) and byte, word, doubleword and quadword integer operands (with AVX-512BW/DQ and VBMI).<ref name="newisa" />{{rp|23}} === CPUs with AVX-512 === {| class="wikitable" ! scope="col" | AVX-512 Subset ! scope="col" | F ! scope="col" | CD ! scope="col" | ER ! scope="col" | PF ! scope="col" | 4FMAPS ! scope="col" | 4VNNIW ! scope="col" | VPOPCNTDQ ! scope="col" | VL ! scope="col" | DQ ! scope="col" | BW ! scope="col" | IFMA ! scope="col" | VBMI ! scope="col" | VBMI2 ! scope="col" | BITALG ! scope="col" | VNNI ! scope="col" | BF16 ! scope="col" | VPCLMULQDQ ! scope="col" | GFNI ! scope="col" | VAES ! scope="col" | VP2INTERSECT ! scope="col" | FP16 |- ! scope="row" |Intel [[Xeon Phi#Knights Landing|Knights Landing]] (2016) |colspan="2" rowspan="9" {{Yes}} |colspan="2" rowspan="2" {{Yes}} |colspan="17" {{No}} |- ! scope="row" |Intel [[Knights Mill]] (2017) |colspan="3" {{Yes}} |colspan="14" {{No}} |- ! scope="row" |Intel [[Skylake (microarchitecture)|Skylake-SP]], [[Skylake (microarchitecture)|Skylake-X]] (2017) |colspan="4" rowspan="11" {{No}} |rowspan="4" {{No}} |colspan="3" rowspan="4" {{Yes}} |colspan="11" {{No}} |- ! scope="row" |Intel [[Cannon Lake (microarchitecture)|Cannon Lake]] (2018) |colspan="2" {{Yes}} |colspan="9" {{No}} |- ! scope="row" |Intel [[Cascade Lake (microarchitecture)|Cascade Lake-SP]] (2019) |colspan="4" {{No}} |colspan="1" {{Yes}} |colspan="6" {{No}} |- ! scope="row" |Intel [[Cooper Lake (microarchitecture)|Cooper Lake]] (2020) |colspan="4" {{No}} |colspan="2" {{Yes}} |colspan="5" {{No}} |- ! scope="row" |Intel [[Ice Lake (microarchitecture)|Ice Lake]] (2019) |colspan="9" rowspan="3" {{Yes}} |rowspan="3" {{No}} |colspan="3" rowspan="3" {{Yes}} |colspan="2" {{No}} |- !Intel [[Tiger Lake (microprocessor)|Tiger Lake]] (2020) |{{Yes}} |{{No}} |- !Intel [[Rocket Lake]] (2021) |colspan="2" {{No}} |- !Intel [[Alder Lake (microprocessor)|Alder Lake]] (2021) |colspan="2" {{Maybe|}} |colspan="15" {{Maybe|Not officially supported, but can be enabled on some motherboards with some BIOS versions{{ref|adl-avx512-note|Note 1}}}} |- !AMD [[Zen 4]] (2022) |colspan="2" rowspan="3" {{Yes}} |colspan="13" rowspan="3" {{Yes}} |colspan="2" {{No}} |- !Intel [[Sapphire Rapids (microprocessor)|Sapphire Rapids]] (2023) |{{No}} |{{Yes}} |- !AMD [[Zen 5]] (2024) | colspan="1" {{Yes}} | colspan="1" {{No}} |} <ref name="gu9Hh">{{Cite web|url=https://software.intel.com/en-us/articles/intel-software-development-emulator|title=Intel® Software Development Emulator {{!}} Intel® Software|website=software.intel.com|access-date=June 11, 2016}}</ref> {{note|adl-avx512-note|Note 1}}: AVX-512 is disabled by default in [[Alder Lake (microprocessor)|Alder Lake]] processors. On some motherboards with some BIOS versions, AVX-512 can be enabled in the BIOS, but this requires disabling E-cores.<ref>{{cite web|url=https://www.anandtech.com/show/17047/the-intel-12th-gen-core-i912900k-review-hybrid-performance-brings-hybrid-complexity/2|title=The Intel 12th Gen Core i9-12900K Review: Hybrid Performance Brings Hybrid Complexity|website=AnandTech|first1=Ian|last1=Cutress|first2=Andrei|last2=Frumusanu|language=en|access-date=2021-11-05}}</ref> However, Intel has begun fusing AVX-512 off on newer Alder Lake processors.<ref>{{Cite web |first=Paul |last=Alcorn |date=2022-03-02 |title=Intel Nukes Alder Lake's AVX-512 Support, Now Fuses It Off in Silicon |url=https://www.tomshardware.com/news/intel-nukes-alder-lake-avx-512-now-fuses-it-off-in-silicon |access-date=2022-10-03 |website=Tom's Hardware |language=en}}</ref> === Compilers supporting AVX-512 === * [[GNU Compiler Collection|GCC]] 4.9 and newer<ref name="ENvJZ">{{Cite web|url=https://gcc.gnu.org/gcc-4.9/changes.html#x86|title=GCC 4.9 Release Series — Changes, New Features, and Fixes{{snd}} GNU Project - Free Software Foundation (FSF)|website=gcc.gnu.org|language=en|access-date=April 3, 2017}}</ref> * [[Clang]] 3.9 and newer<ref name="MLilb">{{Cite web|url=http://releases.llvm.org/3.9.0/docs/ReleaseNotes.html#id10|title=LLVM 3.9 Release Notes — LLVM 3.9 documentation|website=releases.llvm.org|access-date=April 3, 2017}}</ref> * [[Intel C++ Compiler|ICC]] 15.0.1 and newer<ref name="ZZVZ5">{{Cite web|url=https://software.intel.com/en-us/articles/intel-parallel-studio-xe-2015-composer-edition-c-release-notes|title=Intel® Parallel Studio XE 2015 Composer Edition C++ Release Notes {{!}} Intel® Software|website=software.intel.com|language=en|access-date=April 3, 2017}}</ref> * Microsoft Visual Studio 2017 C++ Compiler<ref name="pZckG">{{Cite web|url=https://blogs.msdn.microsoft.com/vcblog/2017/07/11/microsoft-visual-studio-2017-supports-intel-avx-512/|language=en|title=Microsoft Visual Studio 2017 Supports Intel® AVX-512|date=July 11, 2017 }}</ref> * [[Netwide Assembler|NASM]] 2.11 and newer<ref name="nasm-version-history"/> == AVX-VNNI, AVX-IFMA{{anchor|AVX-VNNI|AVX-IFMA}} == AVX-VNNI is a [[VEX prefix|VEX]]-coded variant of the [[AVX-512#VNNI|AVX512-VNNI]] instruction set extension. Similarly, AVX-IFMA is a [[VEX prefix|VEX]]-coded variant of [[AVX-512#IFMA|AVX512-IFMA]]. These extensions provide the same sets of operations as their AVX-512 counterparts, but are limited to 256-bit vectors and do not support any additional features of [[EVEX prefix|EVEX]] encoding, such as broadcasting, opmask registers or accessing more than 16 vector registers. These extensions allow support of VNNI and IFMA operations even when full [[AVX-512]] support is not implemented in the processor. === CPUs with AVX-VNNI === * [[Intel Corporation|Intel]] ** [[Alder Lake (microprocessor)|Alder Lake]] processors (Q4 2021) and newer. * [[AMD]] ** [[Zen 5]] processors (H2 2024) and newer.<ref>{{Cite web |title=AMD Zen 5 Compiler Support Posted For GCC - Confirms New AVX Features & More |url=https://www.phoronix.com/news/AMD-Zen-5-Znver-5-GCC |access-date=2024-02-10 |website=www.phoronix.com |language=en}}</ref> === CPUs with AVX-IFMA === * [[Intel Corporation|Intel]] ** [[Sierra Forest]] E-core-only Xeon processors (H2 2024) and newer. ** Grand Ridge special-purpose processors and newer. ** [[Meteor Lake]] mobile processors (Q4 2023) and newer. ** [[Arrow Lake (microprocessor)|Arrow Lake]] desktop processors and newer. == AVX10 == AVX10, announced in August 2023, is a new, "converged" AVX instruction set. It addresses several issues of AVX-512, in particular that it is split into too many parts<ref>{{Cite web|last=Mann|first=Tobias|title=Intel's AVX10 promises benefits of AVX-512 without baggage|url=https://www.theregister.com/2023/08/15/avx10_intel_interviews/|date=2023-08-15|access-date=2023-08-20|website=www.theregister.com|language=en}}</ref> (20 feature flags) and that it makes 512-bit vectors mandatory to support. AVX10 presents a simplified CPUID interface to test for instruction support, consisting of the AVX10 version number (indicating the set of instructions supported, with later versions always being a superset of an earlier one) and the available maximum vector length (256 or 512 bits).<ref name=avx10-wp>{{cite web |title=The Converged Vector ISA: Intel® Advanced Vector Extensions 10 Technical Paper |url=https://www.intel.com/content/www/us/en/content-details/784343/the-converged-vector-isa-intel-advanced-vector-extensions-10-technical-paper.html |website=Intel}}</ref> A combined notation is used to indicate the version and vector length: for example, AVX10.2/256 indicates that a CPU is capable of the second version of AVX10 with a maximum vector width of 256 bits.<ref name=avx10-spec/> The first and "early" version of AVX10, notated AVX10.1, will ''not'' introduce any instructions or encoding features beyond what is already in AVX-512 (specifically, in Intel [[Sapphire Rapids (microprocessor)|Sapphire Rapids]]: AVX-512F, CD, VL, DQ, BW, IFMA, VBMI, VBMI2, BITALG, VNNI, GFNI, VPOPCNTDQ, VPCLMULQDQ, VAES, BF16, FP16). The second and "fully-featured" version, AVX10.2, introduces new features such as YMM embedded rounding and Suppress All Exception. For CPUs supporting AVX10 and 512-bit vectors, all legacy AVX-512 feature flags will remain set to facilitate applications supporting AVX-512 to continue using AVX-512 instructions.<ref name=avx10-spec>{{cite web |title=Intel® Advanced Vector Extensions 10 (Intel® AVX10) Architecture Specification |url=https://www.intel.com/content/www/us/en/content-details/784267/intel-advanced-vector-extensions-10-intel-avx10-architecture-specification.html |website=Intel}}</ref> AVX10.1/512 will be available on [[Granite Rapids]].<ref name=avx10-spec/> == APX == {{main|X86#APX (Advanced Performance Extensions)}} APX is a new extension. It is not focused on vector computation, but provides RISC-like extensions to the x86-64 architecture by doubling the number of general purpose registers to 32 and introducing three-operand instruction formats. AVX is only tangentially affected as APX introduces extended operands.<ref>{{cite web |title=Intel® Advanced Performance Extensions (Intel® APX) Architecture Specification |url=https://www.intel.com/content/www/us/en/content-details/784266/intel-advanced-performance-extensions-intel-apx-architecture-specification.html?wapkw=apx |publisher=Intel}}</ref><ref>{{Cite web|last=Robinson|first=Dan|title=Intel discloses x86 and vector instructions for future chips|url=https://www.theregister.com/2023/07/26/intel_x86_vector_instructions/|date=2023-07-26|access-date=2023-08-20|website=www.theregister.com|language=en}}</ref> == Applications == * Suitable for [[floating point]]-intensive calculations in multimedia, scientific and financial applications (AVX2 adds support for [[integer]] operations). * Increases parallelism and throughput in floating point [[SIMD]] calculations. * Reduces register load due to the non-destructive instructions. * Improves Linux RAID software performance (required AVX2, AVX is not sufficient)<ref name="K5FHF">{{Cite web |url= https://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=2c935842bdb46f5f557426feb4d2bdfdad1aa5f9 |archive-url= https://archive.today/20130415065954/https://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=2c935842bdb46f5f557426feb4d2bdfdad1aa5f9 |url-status= dead |archive-date= April 15, 2013 |title= Linux RAID |date= February 17, 2013 |publisher= LWN}}</ref> === Software === * [[Blender (software)|Blender]] uses AVX, AVX2 and AVX-512 in the Cycles render engine.<ref>{{cite web |last1=Jaroš |first1=Milan |last2=Strakoš |first2=Petr |last3=Říha |first3=Lubomír |title=Rendering in Blender using AVX-512 Vectorization |url=https://www.ixpug.org/documents/1520629330Jaros-IXPUG-CINECABlender5.pdf |website=Intel eXtreme Performance Users Group |publisher=[[VSB – Technical University of Ostrava|Technical University of Ostrava]] |access-date=28 October 2022 |language=en |date=28 May 2022 }}</ref> * Bloombase uses AVX, AVX2 and AVX-512 in their Bloombase Cryptographic Module (BCM). * [[Botan (programming library)|Botan]] uses both AVX and AVX2 when available to accelerate some algorithms, like ChaCha. * [[BSAFE]] C toolkits uses AVX and AVX2 where appropriate to accelerate various cryptographic algorithms.<ref>{{cite web|url=https://www.dell.com/support/kbdoc/000190141/comparison-of-bsafe-cryptographic-library-implementations|title=Comparison of BSAFE cryptographic library implementations|date=July 25, 2023}}</ref> * [[Crypto++]] uses both AVX and AVX2 when available to accelerate some algorithms, like Salsa and ChaCha. * [[Esri]] [[ArcGIS]] Data Store uses AVX2 for graph storage.<ref>{{Cite web |title=ArcGIS Data Store 11.2 System Requirements |url=https://enterprise.arcgis.com/en/system-requirements/latest/windows/arcgis-data-store-system-requirements.htm#ESRI_SECTION1_40B68391470C40E0B06DFA2A4412C6E7 |access-date=2024-01-24 |website=ArcGIS Enterprise}}</ref> * [[OpenSSL]] uses AVX- and AVX2-optimized cryptographic functions since version 1.0.2.<ref name="04fXZ">{{Cite web |title=Improving OpenSSL Performance|date=May 26, 2015|access-date=February 28, 2017|url=https://software.intel.com/en-us/articles/improving-openssl-performance}}</ref> Support for AVX-512 was added in version 3.0.0.<ref>{{cite web|url=https://github.com/openssl/openssl/blob/openssl-3.0.0/CHANGES.md|title=OpenSSL 3.0.0 release notes|website=[[GitHub]] |date=September 7, 2021}}</ref> Some of these optimizations are also present in various clones and forks, like LibreSSL. * [[Prime95]]/MPrime, the software used for [[GIMPS]], started using the AVX instructions since version 27.1, AVX2 since 28.6 and AVX-512 since 29.1.<ref>{{cite web|url=https://www.mersenne.org/download/whatsnew_308b15.txt|title=Prime95 release notes|access-date=2022-07-10}}</ref> * [https://code.videolan.org/videolan/dav1d dav1d] [[AV1]] decoder can use AVX2 and AVX-512 on supported CPUs.<ref name="O9KLK">{{Cite web |title=dav1d: performance and completion of the first release|date=November 21, 2018|access-date=November 22, 2018|url=http://www.jbkempf.com/blog/post/2018/dav1d-toward-the-first-release}}</ref><ref>{{cite web|url=https://code.videolan.org/videolan/dav1d/-/tags/0.6.0|title=dav1d 0.6.0 release notes|date=March 6, 2020}}</ref> * [https://gitlab.com/AOMediaCodec/SVT-AV1 SVT-AV1] [[AV1]] encoder can use AVX2 and AVX-512 to accelerate video encoding.<ref>{{cite web|url=https://gitlab.com/AOMediaCodec/SVT-AV1/-/blob/078d27e0bdc52cad2dc28e630fdd6391857e044e/CHANGELOG.md#070-2019-09-26|title=SVT-AV1 0.7.0 release notes|date=September 26, 2019}}</ref> * [[dnetc]], the software used by [[distributed.net]], has an AVX2 core available for its RC5 project and will soon release one for its OGR-28 project. * [[Einstein@Home]] uses AVX in some of their distributed applications that search for [[gravitational waves]].<ref name="GLExU">{{Cite web |title= Einstein@Home Applications | url= http://einsteinathome.org/apps.php}}</ref> * [[Folding@home]] uses AVX on calculation cores implemented with [[GROMACS]] library. * Helios uses AVX and AVX2 hardware acceleration on 64-bit x86 hardware.<ref>{{Cite web|title=FAQ, Helios|url=https://heliosmusic.io/faq/|access-date=2021-07-05|website=Helios|language=en-US}}</ref> * [[Horizon: Zero Dawn]] uses AVX in its Decima game engine. * [[PCSX2]] and [[RPCS3]], are open source [[PlayStation 2|PS2]] and [[PlayStation 3|PS3]] emulators respectively that use AVX2 and [[AVX-512]] instructions to emulate games. * [[Network Device Interface]], an IP video/audio protocol developed by NewTek for live broadcast production, uses AVX and AVX2 for increased performance. * [[TensorFlow]] since version 1.6 and tensorflow above versions requires CPU supporting at least AVX.<ref name="MDJ95">{{Cite web |title= Tensorflow 1.6 | website= [[GitHub]]| url= https://github.com/tensorflow/tensorflow/releases/tag/v1.6.0}}</ref> * [[x264]], [[x265]] and [[Versatile Video Coding|VTM]] video encoders can use AVX2 or AVX-512 to speed up encoding. * Various CPU-based [[cryptocurrency]] miners (like pooler's cpuminer for [[Bitcoin]] and [[Litecoin]]) use AVX and AVX2 for various cryptography-related routines, including [[SHA-256]] and [[scrypt]]. * [[libsodium]] uses AVX in the implementation of scalar multiplication for [[Curve25519]] and [[Ed25519]] algorithms, AVX2 for [[BLAKE2b]], [[Salsa20]], [[ChaCha20]], and AVX2 and AVX-512 in implementation of [[Argon2]] algorithm. * [[libvpx]] open source reference implementation of VP8/VP9 encoder/decoder, uses AVX2 or AVX-512 when available. * [[libjpeg-turbo]] uses AVX2 to accelerate image processing. * [[FFTW]] can utilize AVX, AVX2 and AVX-512 when available. * LLVMpipe, a software OpenGL renderer in [[Mesa (computer graphics)|Mesa]] using Gallium and [[LLVM]] infrastructure, uses AVX2 when available. * [[glibc]] uses AVX2 (with [[FMA instruction set|FMA]]) and AVX-512 for optimized implementation of various mathematical (i.e. <code>expf</code>, <code>sinf</code>, <code>powf</code>, <code>atanf</code>, <code>atan2f</code>) and string (<code>memmove</code>, <code>memcpy</code>, etc.) functions in [[libc]]. * [[Linux kernel]] can use AVX or AVX2, together with [[AES instruction set#x86 architecture processors|AES-NI]] as optimized implementation of [[AES-GCM]] cryptographic algorithm. * [[Linux kernel]] uses AVX or AVX2 when available, in optimized implementation of multiple other cryptographic ciphers: [[Camellia (cipher)|Camellia]], [[CAST5]], [[CAST6]], [[Serpent (cipher)|Serpent]], [[Twofish]], [[MORUS-1280]], and other primitives: [[Poly1305]], [[SHA-1]], [[SHA-256]], [[SHA-512]], [[ChaCha20]]. * POCL, a portable Computing Language, that provides implementation of [[OpenCL]], makes use of AVX, AVX2 and AVX-512 when possible. * [[.NET]] and [[.NET Framework]] can utilize AVX, AVX2 through the generic <code>System.Numerics.Vectors</code> namespace. * [[.NET Core]], starting from version 2.1 and more extensively after version 3.0 can directly use all AVX, AVX2 intrinsics through the <code>System.Runtime.Intrinsics.X86</code> namespace. * [[EmEditor]] 19.0 and above uses AVX2 to speed up processing.<ref name="L6yrb">[https://www.emeditor.com/text-editor-features/history/new-in-version-19-0/ New in Version 19.0 – EmEditor (Text Editor)]</ref> * Native Instruments' Massive X softsynth requires AVX.<ref name="BUArc">{{Cite web|url=http://support.native-instruments.com/hc/en-us/articles/360001544678-MASSIVE-X-Requires-AVX-Compatible-Processor|title=MASSIVE X Requires AVX Compatible Processor|website=Native Instruments|language=en-US|access-date=2019-11-29}}</ref> * [[Microsoft Teams]] uses AVX2 instructions to create a blurred or custom background behind video chat participants,<ref name="nSmsk">{{Cite web|url=https://docs.microsoft.com/en-us/microsoftteams/hardware-requirements-for-the-teams-app|title=Hardware requirements for Microsoft Teams|website=Microsoft|language=en-US|access-date=2020-04-17}}</ref> and for background noise suppression.<ref name="6JbHG">{{cite web |title=Reduce background noise in Teams meetings |url=https://support.microsoft.com/en-gb/office/reduce-background-noise-in-teams-meetings-1a9c6819-137d-4b3b-a1c8-4ab20b234c0d |website=Microsoft Support |access-date=5 January 2021}}</ref> * [[Pale Moon]] custom Windows builds greatly increase browsing speed due to the use of AVX2. * [https://github.com/simdjson/simdjson {{Not a typo|simdjson}}], a [[JSON]] parsing library, uses AVX2 and AVX-512 to achieve improved decoding speed.<ref name="33car">{{cite journal |last1=Langdale |first1=Geoff |last2=Lemire |first2=Daniel |arxiv=1902.08318 |title=Parsing Gigabytes of JSON per Second |journal=The VLDB Journal |date=2019 |volume=28 |issue=6 |pages=941–960 |doi=10.1007/s00778-019-00578-5 |s2cid=67856679}}</ref><ref>{{cite web|url=https://github.com/simdjson/simdjson/releases/tag/v2.1.0|title={{Not a typo|simdjson}} 2.1.0 release notes|website=[[GitHub]] |date=June 30, 2022}}</ref> * [https://github.com/intel/x86-simd-sort x86-simd-sort], a library with sorting algorithms for 16, 32 and 64-bit numeric data types, uses AVX2 and AVX-512. The library is used in [[NumPy]] and [[OpenJDK]] to accelerate sorting algorithms.<ref>{{cite web|url=https://www.phoronix.com/news/Intel-x86-simd-sort-3.0|last=Larabel|first=Michael|website=Phoronix|title=OpenJDK Merges Intel's x86-simd-sort For Speeding Up Data Sorting 7~15x|date=October 6, 2023}}</ref> * [https://github.com/zlib-ng/zlib-ng zlib-ng], an optimized version of [[zlib]], contains AVX2 and AVX-512 versions of some data compression algorithms. * [[Tesseract (software)|Tesseract OCR]] engine uses AVX, AVX2 and AVX-512 to accelerate character recognition.<ref>{{cite web|url=https://www.phoronix.com/scan.php?page=news_item&px=Tesseract-OCR-5.2-Released|last=Larabel|first=Michael|website=Phoronix|title=Tesseract OCR 5.2 Engine Finds Success With AVX-512F|date=July 7, 2022}}</ref> == Downclocking == Since AVX instructions are wider, they consume more power and generate more heat. Executing heavy AVX instructions at high CPU clock frequencies may affect CPU stability due to excessive [[voltage droop]] during load transients. Some Intel processors have provisions to reduce the [[Intel Turbo Boost|Turbo Boost]] frequency limit when such instructions are being executed. This reduction happens even if the CPU hasn't reached its thermal and power consumption limits. On [[Skylake (microarchitecture)|Skylake]] and its derivatives, the throttling is divided into three levels:<ref name="lic">{{cite web |last1=Lemire |first1= Daniel |title=AVX-512: when and how to use these new instructions |url=https://lemire.me/blog/2018/09/07/avx-512-when-and-how-to-use-these-new-instructions/ |website=Daniel Lemire's blog|date= September 7, 2018 }}</ref><ref name="q41V7">{{cite web |author1=BeeOnRope |title=SIMD instructions lowering CPU frequency |url=https://stackoverflow.com/a/56861355 |website=Stack Overflow}}</ref> * L0 (100%): The normal turbo boost limit. * L1 (~85%): The "AVX boost" limit. Soft-triggered by 256-bit "heavy" (floating-point unit: FP math and integer multiplication) instructions. Hard-triggered by "light" (all other) 512-bit instructions. * L2 (~60%):{{Dubious|Re: Downclocking|date=July 2021}} The "AVX-512 boost" limit. Soft-triggered by 512-bit heavy instructions. The frequency transition can be soft or hard. Hard transition means the frequency is reduced as soon as such an instruction is spotted; soft transition means that the frequency is reduced only after reaching a threshold number of matching instructions. The limit is per-thread.<ref name="lic" /> In [[Ice Lake (microarchitecture)|Ice Lake]], only two levels persist:<ref name="icl-avx512-freq">{{cite web |last1=Downs |first1=Travis |title=Ice Lake AVX-512 Downclocking |url=https://travisdowns.github.io/blog/2020/08/19/icl-avx512-freq.html |website=Performance Matters blog|date=August 19, 2020 }}</ref> * L0 (100%): The normal turbo boost limit. * L1 (~97%): Triggered by any 512-bit instructions, but only when single-core boost is active; not triggered when multiple cores are loaded. [[Rocket Lake]] processors do not trigger frequency reduction upon executing any kind of vector instructions regardless of the vector size.<ref name="icl-avx512-freq" /> However, downclocking can still happen due to other reasons, such as reaching thermal and power limits. Downclocking means that using AVX in a mixed workload with an Intel processor can incur a frequency penalty. Avoiding the use of wide and heavy instructions help minimize the impact in these cases. AVX-512VL allows for using 256-bit or 128-bit operands in AVX-512 instructions, making it a sensible default for mixed loads.<ref name="LttMf">{{cite web |title=x86 - AVX 512 vs AVX2 performance for simple array processing loops |url=https://stackoverflow.com/a/52543573 |website=Stack Overflow}}</ref> On supported and unlocked variants of processors that down-clock, the clock ratio reduction offsets (typically called AVX and AVX-512 offsets) are adjustable and may be turned off entirely (set to 0x) via Intel's Overclocking / Tuning utility or in BIOS if supported there.<ref name="intelXTUGuide">{{cite web |title=Intel® Extreme Tuning Utility (Intel® XTU) Guide to Overclocking : Advanced Tuning |url=https://www.intel.com/content/www/us/en/gaming/resources/overclocking-xtu-guide.html#articleparagraph-14 |website=Intel |access-date=18 July 2021 |language=en |quote=See image in linked section, where AVX2 ratio has been set to 0.}}</ref> == See also == * [[F16C]] instruction set extension * [[Memory Protection Extensions]] * [[AArch64#Scalable Vector Extension (SVE)|Scalable Vector Extension for ARM]] - a new vector instruction set (supplementing [[VFP (instruction set)|VFP]] and [[NEON (instruction set)|NEON]]) similar to AVX-512, with some additional features. == References == {{reflist|1=30em}} == External links == * [https://software.intel.com/sites/landingpage/IntrinsicsGuide/ Intel Intrinsics Guide] * [https://docs.oracle.com/cd/E36784_01/pdf/E36859.pdf x86 Assembly Language Reference Manual] {{AMD technology}} {{Intel technology}} {{Multimedia extensions|state=uncollapsed}} [[Category:X86 instructions]] [[Category:SIMD computing]] [[Category:AMD technologies]]'
New page wikitext, after the edit ($1) (new_wikitext)
'{{Short description|Extensions to the x86 instruction set architecture for microprocessors from Intel and AMD}} {{Use mdy dates|date=September 2018}} '''Advanced Vector Extensions''' ('''AVX''', also known as '''Gesher New Instructions''' and then '''Sandy Bridge New Instructions''') are [[Single instruction, multiple data|SIMD]] extensions to the [[x86]] [[instruction set architecture]] for [[microprocessor]]s from [[Intel]] and [[Advanced Micro Devices]] (AMD). They were proposed by Intel in March 2008 and first supported by Intel with the [[Sandy Bridge]]<ref name="If1e5">{{Cite web|url=http://www.realworldtech.com/sandy-bridge/6/|title=Intel's Sandy Bridge Microarchitecture|last=Kanter|first=David|date=September 25, 2010|website=www.realworldtech.com|language=en-US|access-date=February 17, 2018}}</ref> processor shipping in Q1 2011 and later by AMD with the [[Bulldozer (microarchitecture)|Bulldozer]]<ref name="TSJKX">{{Cite news|url=https://www.extremetech.com/computing/100583-analyzing-bulldozers-scaling-single-thread-performance/4|title=Analyzing Bulldozer: Why AMD's chip is so disappointing - Page 4 of 5 - ExtremeTech|last=Hruska|first=Joel|date=October 24, 2011|work=ExtremeTech|access-date=February 17, 2018|language=en-US}}</ref> processor shipping in Q3 2011. AVX provides new features, new instructions, and a new coding scheme. '''AVX2''' (also known as '''Haswell New Instructions''') expands most integer commands to 256 bits and introduces new instructions. They were first supported by Intel with the [[Haswell (microarchitecture)|Haswell]] processor, which shipped in 2013. [[AVX-512]] expands AVX to 512-bit support using a new [[EVEX prefix]] encoding proposed by Intel in July 2013 and first supported by Intel with the [[Knights Landing (microarchitecture)|Knights Landing]] co-processor, which shipped in 2016.<ref name="reinders512">{{citation |url= http://software.intel.com/en-us/blogs/2013/avx-512-instructions |title= AVX-512 Instructions |date= July 23, 2013 |author= James Reinders |publisher= [[Intel]] |access-date= August 20, 2013}}</ref><ref name="8JgOG">{{Cite news|url=http://ark.intel.com/products/94033/Intel-Xeon-Phi-Processor-7210-16GB-1_30-GHz-64-core|title=Intel Xeon Phi Processor 7210 (16GB, 1.30 GHz, 64 core) Product Specifications|access-date=March 16, 2018|newspaper=Intel ARK (Product Specs)}}</ref> In conventional processors, AVX-512 was introduced with [[Skylake (microarchitecture)|Skylake]] server and HEDT processors in 2017. == {{Anchor|AVX1}}Advanced Vector Extensions == AVX uses sixteen YMM registers to perform a single instruction on multiple pieces of data (see [[SIMD]]). Each YMM register can hold and do simultaneous operations (math) on: * eight 32-bit single-precision floating point numbers or * four 64-bit double-precision floating point numbers. The width of the SIMD registers is increased from 128 bits to 256 bits, and renamed from XMM0–XMM7 to YMM0–YMM7 (in [[x86-64]] mode, from XMM0–XMM15 to YMM0–YMM15). The legacy [[Streaming SIMD Extensions|SSE]] instructions can be still utilized via the [[VEX prefix]] to operate on the lower 128 bits of the YMM registers. {| class="wikitable floatright" style="margin-left: 1rem; text-align: center; line-height: normal" |+ style="margin-bottom: 0.2em; font-size: small;" | AVX-512 register scheme as extension from the AVX (YMM0-YMM15) and SSE (XMM0-XMM15) registers |- | style="width: 50%; border: none; font-size: xx-small;" | <span style="float: left">511</span> <span style="float: right">256</span> | style="width: 25%; border: none; font-size: xx-small;" | <span style="float: left">255</span> <span style="float: right">128</span> | style="width: 25%; border: none; font-size: xx-small;" | <span style="float: left">127</span> <span style="float: right">0</span> |- | style="border-top: none" | | style="border-top: none" | | style="border-top: none" | |- | style="padding: 0" | &nbsp;&nbsp;ZMM0&nbsp;&nbsp; | style="padding: 0; background: #ddd" | &nbsp;&nbsp;YMM0&nbsp;&nbsp; | style="padding: 0; background: #ccc" | &nbsp;&nbsp;XMM0&nbsp;&nbsp; |- | style="padding: 0" | ZMM1 | style="padding: 0; background: #ddd" | YMM1 | style="padding: 0; background: #ccc" | XMM1 |- | style="padding: 0" | ZMM2 | style="padding: 0; background: #ddd" | YMM2 | style="padding: 0; background: #ccc" | XMM2 |- | style="padding: 0" | ZMM3 | style="padding: 0; background: #ddd" | YMM3 | style="padding: 0; background: #ccc" | XMM3 |- | style="padding: 0" | ZMM4 | style="padding: 0; background: #ddd" | YMM4 | style="padding: 0; background: #ccc" | XMM4 |- | style="padding: 0" | ZMM5 | style="padding: 0; background: #ddd" | YMM5 | style="padding: 0; background: #ccc" | XMM5 |- | style="padding: 0" | ZMM6 | style="padding: 0; background: #ddd" | YMM6 | style="padding: 0; background: #ccc" | XMM6 |- | style="padding: 0" | ZMM7 | style="padding: 0; background: #ddd" | YMM7 | style="padding: 0; background: #ccc" | XMM7 |- | style="padding: 0" | ZMM8 | style="padding: 0; background: #ddd" | YMM8 | style="padding: 0; background: #ccc" | XMM8 |- | style="padding: 0" | ZMM9 | style="padding: 0; background: #ddd" | YMM9 | style="padding: 0; background: #ccc" | XMM9 |- | style="padding: 0" | ZMM10 | style="padding: 0; background: #ddd" | YMM10 | style="padding: 0; background: #ccc" | XMM10 |- | style="padding: 0" | ZMM11 | style="padding: 0; background: #ddd" | YMM11 | style="padding: 0; background: #ccc" | XMM11 |- | style="padding: 0" | ZMM12 | style="padding: 0; background: #ddd" | YMM12 | style="padding: 0; background: #ccc" | XMM12 |- | style="padding: 0" | ZMM13 | style="padding: 0; background: #ddd" | YMM13 | style="padding: 0; background: #ccc" | XMM13 |- | style="padding: 0" | ZMM14 | style="padding: 0; background: #ddd" | YMM14 | style="padding: 0; background: #ccc" | XMM14 |- | style="padding: 0" | ZMM15 | style="padding: 0; background: #ddd" | YMM15 | style="padding: 0; background: #ccc" | XMM15 |- | style="padding: 0" | ZMM16 | style="padding: 0" | YMM16 | style="padding: 0" | XMM16 |- | style="padding: 0" | ZMM17 | style="padding: 0" | YMM17 | style="padding: 0" | XMM17 |- | style="padding: 0" | ZMM18 | style="padding: 0" | YMM18 | style="padding: 0" | XMM18 |- | style="padding: 0" | ZMM19 | style="padding: 0" | YMM19 | style="padding: 0" | XMM19 |- | style="padding: 0" | ZMM20 | style="padding: 0" | YMM20 | style="padding: 0" | XMM20 |- | style="padding: 0" | ZMM21 | style="padding: 0" | YMM21 | style="padding: 0" | XMM21 |- | style="padding: 0" | ZMM22 | style="padding: 0" | YMM22 | style="padding: 0" | XMM22 |- | style="padding: 0" | ZMM23 | style="padding: 0" | YMM23 | style="padding: 0" | XMM23 |- | style="padding: 0" | ZMM24 | style="padding: 0" | YMM24 | style="padding: 0" | XMM24 |- | style="padding: 0" | ZMM25 | style="padding: 0" | YMM25 | style="padding: 0" | XMM25 |- | style="padding: 0" | ZMM26 | style="padding: 0" | YMM26 | style="padding: 0" | XMM26 |- | style="padding: 0" | ZMM27 | style="padding: 0" | YMM27 | style="padding: 0" | XMM27 |- | style="padding: 0" | ZMM28 | style="padding: 0" | YMM28 | style="padding: 0" | XMM28 |- | style="padding: 0" | ZMM29 | style="padding: 0" | YMM29 | style="padding: 0" | XMM29 |- | style="padding: 0" | ZMM30 | style="padding: 0" | YMM30 | style="padding: 0" | XMM30 |- | style="padding: 0" | ZMM31 | style="padding: 0" | YMM31 | style="padding: 0" | XMM31 |} AVX introduces a three-operand SIMD instruction format called [[VEX coding scheme]], where the destination register is distinct from the two source operands. For example, an [[Streaming SIMD Extensions|SSE]] instruction using the conventional two-operand form {{nowrap|''a'' ← ''a'' + ''b''}} can now use a non-destructive three-operand form {{nowrap|''c'' ← ''a'' + ''b''}}, preserving both source operands. Originally, AVX's three-operand format was limited to the instructions with SIMD operands (YMM), and did not include instructions with general purpose registers (e.g. EAX). It was later used for coding new instructions on general purpose registers in later extensions, such as [[bit manipulation instruction set|BMI]]. VEX coding is also used for instructions operating on the k0-k7 mask registers that were introduced with [[AVX-512]]. The [[data structure alignment|alignment]] requirement of SIMD memory operands is relaxed.<ref name="intel_vol1">{{cite book|title=Intel 64 and IA-32 Architectures Software Developer's Manual Volume 1: Basic Architecture|publisher=Intel Corporation|page=349|edition=-051US|url=http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-software-developer-vol-1-manual.pdf|access-date=August 23, 2014|chapter=14.9|quote=Memory arguments for most instructions with VEX prefix operate normally without causing #GP(0) on any byte-granularity alignment (unlike Legacy SSE instructions).}}</ref> Unlike their non-VEX coded counterparts, most VEX coded vector instructions no longer require their memory operands to be aligned to the vector size. Notably, the <code>VMOVDQA</code> instruction still requires its memory operand to be aligned. The new [[VEX coding scheme]] introduces a new set of code prefixes that extends the [[opcode]] space, allows instructions to have more than two operands, and allows SIMD vector registers to be longer than 128 bits. The VEX prefix can also be used on the legacy SSE instructions giving them a three-operand form, and making them interact more efficiently with AVX instructions without the need for <code>VZEROUPPER</code> and <code>VZEROALL</code>. The AVX instructions support both 128-bit and 256-bit SIMD. The 128-bit versions can be useful to improve old code without needing to widen the vectorization, and avoid the penalty of going from SSE to AVX, they are also faster on some early AMD implementations of AVX. This mode is sometimes known as AVX-128.<ref name="XEwSu">{{cite web|url=https://gcc.gnu.org/onlinedocs/gcc-4.8.2/gcc/i386-and-x86-64-Options.html#i386-and-x86-64-Options|title=i386 and x86-64 Options - Using the GNU Compiler Collection (GCC)|access-date=February 9, 2014}}</ref> === New instructions === These AVX instructions are in addition to the ones that are 256-bit extensions of the legacy 128-bit SSE instructions; most are usable on both 128-bit and 256-bit operands. {| class="wikitable" |- ! Instruction ! Description |- | <code>VBROADCASTSS</code>, <code>VBROADCASTSD</code>, <code>VBROADCASTF128</code> | Copy a 32-bit, 64-bit or 128-bit memory operand to all elements of a XMM or YMM vector register. |- | <code>VINSERTF128</code> | Replaces either the lower half or the upper half of a 256-bit YMM register with the value of a 128-bit source operand. The other half of the destination is unchanged. |- | <code>VEXTRACTF128</code> | Extracts either the lower half or the upper half of a 256-bit YMM register and copies the value to a 128-bit destination operand. |- | <code>VMASKMOVPS</code>, <code>VMASKMOVPD</code> | Conditionally reads any number of elements from a SIMD vector memory operand into a destination register, leaving the remaining vector elements unread and setting the corresponding elements in the destination register to zero. Alternatively, conditionally writes any number of elements from a SIMD vector register operand to a vector memory operand, leaving the remaining elements of the memory operand unchanged. On the AMD Jaguar processor architecture, this instruction with a memory source operand takes more than 300 clock cycles when the mask is zero, in which case the instruction should do nothing. This appears to be a design flaw.<ref name="wNTlH">{{cite web |url=http://www.agner.org/optimize/microarchitecture.pdf |title=The microarchitecture of Intel, AMD and VIA CPUs: An optimization guide for assembly programmers and compiler makers |access-date=October 17, 2016}}</ref> |- | <code>VPERMILPS</code>, <code>VPERMILPD</code> | Permute In-Lane. Shuffle the 32-bit or 64-bit vector elements of one input operand. These are in-lane 256-bit instructions, meaning that they operate on all 256 bits with two separate 128-bit shuffles, so they can not shuffle across the 128-bit lanes.<ref name="sK2vQ">{{cite web |url=https://chessprogramming.wikispaces.com/AVX2 |title=Chess programming AVX2 |access-date=October 17, 2016 |archive-date=July 10, 2017 |archive-url=https://web.archive.org/web/20170710034021/http://chessprogramming.wikispaces.com/AVX2 |url-status=dead}}</ref> |- | <code>VPERM2F128</code> | Shuffle the four 128-bit vector elements of two 256-bit source operands into a 256-bit destination operand, with an immediate constant as selector. |- | <code>VTESTPS</code>, <code>VTESTPD</code> | Packed bit test of the packed single-precision or double-precision floating-point sign bits, setting or clearing the ZF flag based on AND and CF flag based on ANDN. |- | <code>VZEROALL</code> | Set all YMM registers to zero and tag them as unused. Used when switching between 128-bit use and 256-bit use. |- | <code>VZEROUPPER</code> | Set the upper half of all YMM registers to zero. Used when switching between 128-bit use and 256-bit use. |} === CPUs with AVX === * [[Intel Corporation|Intel]] ** [[Sandy Bridge]] processors (Q1 2011) and newer, except models branded as Celeron and Pentium.<ref name="fh09g">{{Cite web |url= http://www.extremetech.com/computing/80772-intel-offers-peek-at-nehalem-and-larrabee |title= Intel Offers Peek at Nehalem and Larrabee |date= March 17, 2008 |publisher= ExtremeTech}}</ref> ** Pentium and Celeron branded processors starting with [[Tiger Lake (microprocessor)|Tiger Lake]] (Q3 2020) and newer.<ref name="9r8b9">{{Cite web|title=Intel® Celeron® 6305 Processor (4M Cache, 1.80 GHz, with IPU) Product Specifications|url=https://ark.intel.com/content/www/us/en/ark/products/208646/intel-celeron-6305-processor-4m-cache-1-80-ghz-with-ipu.html|access-date=2020-11-10|website=ark.intel.com|language=en}}</ref> * [[Advanced Micro Devices|AMD]]: ** [[Jaguar (microarchitecture)|Jaguar-based]] processors (Q2 2013) and newer Issues regarding compatibility between future Intel and AMD processors are discussed under [[XOP instruction set#Compatibility issues|XOP instruction set]]. * [[VIA Technologies|VIA]]: ** Nano QuadCore ** Eden X4 * [[Zhaoxin]]: ** WuDaoKou-based processors (KX-5000 and KH-20000) === Compiler and assembler support === * [[Absoft Fortran Compilers|Absoft]] supports with -{{Not a typo|mavx}} flag. * The [[Free Pascal]] compiler supports AVX and AVX2 with the -CfAVX and -CfAVX2 switches from version 2.7.1. * [[Delphi (software)|RAD studio]] (v11.0 Alexandria) supports AVX2 and AVX512.<ref>{{Cite web|title=What's New - RAD Studio|url=https://docwiki.embarcadero.com/RADStudio/Alexandria/en/What%27s_New#Inline_assembler_support_for_AVX_instructions_.28AVX-512.29|access-date=2021-09-17|website=docwiki.embarcadero.com}}</ref> * The [[GNU Assembler]] (GAS) inline assembly functions support these instructions (accessible via GCC), as do Intel primitives and the Intel inline assembler (closely compatible to GAS, although more general in its handling of local references within inline code). GAS supports AVX starting with binutils version 2.19.<ref name="gas-version-history">{{Cite web|title=GAS Changes|url=https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=blob_plain;f=gas/NEWS;hb=refs/heads/binutils-2_42-branch|website=sourceware.org|access-date=2024-05-03}}</ref> * [[GNU Compiler Collection|GCC]] starting with version 4.6 (although there was a 4.3 branch with certain support) and the Intel Compiler Suite starting with version 11.1 support AVX. * The [[Open64]] compiler version 4.5.1 supports AVX with -{{Not a typo|mavx}} flag. * [[PathScale]] supports via the -{{Not a typo|mavx}} flag. * The [[Vector Pascal]] compiler supports AVX via the -{{Not a typo|cpuAVX32}} flag. * The [[Visual Studio 2010]]/[[Visual Studio 2012|2012]] compiler supports AVX via intrinsic and {{Not a typo|/arch:AVX}} switch. * [[Netwide Assembler|NASM]] starting with version 2.03 and newer. There were numerous bug fixes and updates related to AVX in version 2.04.<ref name="nasm-version-history">{{Cite web|title=NASM - The Netwide Assembler, Appendix C: NASM Version History|url=https://nasm.us/doc/nasmdocc.html|website=nasm.us|access-date=2024-05-03}}</ref> * Other assemblers such as [[MASM]] VS2010 version, [[YASM]],<ref name="uHguP">{{cite web|title=YASM 0.7.0 Release Notes|url=http://yasm.tortall.net/releases/Release0.7.0.html|website=yasm.tortall.net}}</ref> [[FASM]] and [[JWASM]]. === Operating system support === AVX adds new register-state through the 256-bit wide YMM register file, so explicit [[operating system]] support is required to properly save and restore AVX's expanded registers between [[context switch]]es. The following operating system versions support AVX: * [[DragonFly BSD]]: support added in early 2013. * [[FreeBSD]]: support added in a patch submitted on January 21, 2012,<ref name="lSP7Y">{{citation |url= http://svnweb.freebsd.org/base?view=revision&revision=230426 |title= Add support for the extended FPU states on amd64, both for native 64bit and 32bit ABIs |publisher= svnweb.freebsd.org |date= January 21, 2012 |access-date= January 22, 2012}}</ref> which was included in the 9.1 stable release.<ref name="HIQRm">{{cite web |url= http://www.freebsd.org/releases/9.1R/announce.html |title= FreeBSD 9.1-RELEASE Announcement |access-date= May 20, 2013}}</ref> * [[Linux]]: supported since kernel version 2.6.30,<ref name="etOsK">{{citation |url= https://git.kernel.org/linus/a30469e7921a6dd2067e9e836d7787cfa0105627 |title= x86: add linux kernel support for YMM state |access-date= July 13, 2009}}</ref> released on June 9, 2009.<ref name="XB18C">{{citation |url= http://kernelnewbies.org/Linux_2_6_30 |title= Linux 2.6.30 - Linux Kernel Newbies |access-date= July 13, 2009}}</ref> * [[macOS]]: support added in 10.6.8 ([[Mac OS X Snow Leopard|Snow Leopard]]) update<ref name="3qGKK">{{citation |url= https://twitter.com/comex/status/85401002349576192 |title= Twitter |access-date= June 23, 2010}}</ref>{{Unreliable source?|date=January 2012}} released on June 23, 2011. In fact, macOS Ventura does not support processors without the AVX2 instruction set. <ref>{{cite web | url=https://arstechnica.com/gadgets/2022/08/running-macos-ventura-on-old-macs-isnt-easy-but-some-devs-are-making-progress/ | title=Devs are making progress getting macOS Ventura to run on unsupported, decade-old Macs | date=August 23, 2022 }}</ref> * [[OpenBSD]]: support added on March 21, 2015.<ref name="K5BEr">{{citation |url= http://marc.info/?l=openbsd-cvs&m=142697057503802&w=2 |title= Add support for saving/restoring FPU state using the XSAVE/XRSTOR. |access-date= March 25, 2015}}</ref> * [[Solaris (operating system)|Solaris]]: supported in Solaris 10 Update 10 and Solaris 11. * [[Microsoft Windows|Windows]]: supported in [[Windows 7]] SP1, [[Windows Server 2008 R2]] SP1,<ref name="2kEEK">{{citation |url= http://msdn.microsoft.com/en-us/library/ff545910.aspx |title= Floating-Point Support for 64-Bit Drivers |access-date= December 6, 2009}}</ref> [[Windows 8]], [[Windows 10]]. ** Windows Server 2008 R2 SP1 with Hyper-V requires a hotfix to support AMD AVX (Opteron 6200 and 4200 series) processors, [http://support.microsoft.com/kb/2568088 KB2568088] ** [[Windows XP]] and [[Windows Server 2003]] do not support AVX in both kernel drivers and user applications. == {{Anchor|AVX2}}Advanced Vector Extensions 2 == Advanced Vector Extensions 2 (AVX2), also known as '''Haswell New Instructions''',<ref name="avx2">{{citation |url= http://software.intel.com/en-us/blogs/2011/06/13/haswell-new-instruction-descriptions-now-available/ |title= Haswell New Instruction Descriptions Now Available |publisher= Software.intel.com |access-date= January 17, 2012}}</ref> is an expansion of the AVX instruction set introduced in Intel's [[Haswell (microarchitecture)|Haswell microarchitecture]]. AVX2 makes the following additions: * expansion of most vector integer SSE and AVX instructions to 256 bits * [[Gather-scatter (vector addressing)|Gather]] support, enabling vector elements to be loaded from non-contiguous memory locations * [[Word (computer architecture)#Size families|DWORD- and QWORD-]]granularity any-to-any permutes * vector shifts. Sometimes three-operand [[FMA instruction set|fused multiply-accumulate]] (FMA3) extension is considered part of AVX2, as it was introduced by Intel in the same processor microarchitecture. This is a separate extension using its own [[CPUID]] flag and is described on [[FMA instruction set#FMA3 instruction set|its own page]] and not below. === New instructions === {| class="wikitable" |- ! Instruction ! Description |- | <code>VBROADCASTSS</code>, <code>VBROADCASTSD</code> | Copy a 32-bit or 64-bit register operand to all elements of a XMM or YMM vector register. These are register versions of the same instructions in AVX1. There is no 128-bit version however, but the same effect can be simply achieved using VINSERTF128. |- | <code>VPBROADCASTB</code>, <code>VPBROADCASTW</code>, <code>VPBROADCASTD</code>, <code>VPBROADCASTQ</code> | Copy an 8, 16, 32 or 64-bit integer register or memory operand to all elements of a XMM or YMM vector register. |- | <code>VBROADCASTI128</code> | Copy a 128-bit memory operand to all elements of a YMM vector register. |- | <code>VINSERTI128</code> | Replaces either the lower half or the upper half of a 256-bit YMM register with the value of a 128-bit source operand. The other half of the destination is unchanged. |- | <code>VEXTRACTI128</code> | Extracts either the lower half or the upper half of a 256-bit YMM register and copies the value to a 128-bit destination operand. |- | <code>VGATHERDPD</code>, <code>VGATHERQPD</code>, <code>VGATHERDPS</code>, <code>VGATHERQPS</code> | [[Gather-scatter (vector addressing)|Gathers]] single or double precision floating point values using either 32 or 64-bit indices and scale. |- | <code>VPGATHERDD</code>, <code>VPGATHERDQ</code>, <code>VPGATHERQD</code>, <code>VPGATHERQQ</code> | Gathers 32 or 64-bit integer values using either 32 or 64-bit indices and scale. |- | <code>VPMASKMOVD</code>, <code>VPMASKMOVQ</code> | Conditionally reads any number of elements from a SIMD vector memory operand into a destination register, leaving the remaining vector elements unread and setting the corresponding elements in the destination register to zero. Alternatively, conditionally writes any number of elements from a SIMD vector register operand to a vector memory operand, leaving the remaining elements of the memory operand unchanged. |- | <code>VPERMPS</code>, <code>VPERMD</code> | Shuffle the eight 32-bit vector elements of one 256-bit source operand into a 256-bit destination operand, with a register or memory operand as selector. |- | <code>VPERMPD</code>, <code>VPERMQ</code> | Shuffle the four 64-bit vector elements of one 256-bit source operand into a 256-bit destination operand, with a register or memory operand as selector. |- | <code>VPERM2I128</code> | Shuffle (two of) the four 128-bit vector elements of ''two'' 256-bit source operands into a 256-bit destination operand, with an immediate constant as selector. |- | <code>VPBLENDD</code> | Doubleword immediate version of the PBLEND instructions from [[SSE4]]. |- | <code>VPSLLVD</code>, <code>VPSLLVQ</code> | Shift left logical. Allows variable shifts where each element is shifted according to the packed input. |- | <code>VPSRLVD</code>, <code>VPSRLVQ</code> | Shift right logical. Allows variable shifts where each element is shifted according to the packed input. |- | <code>VPSRAVD</code> | Shift right arithmetically. Allows variable shifts where each element is shifted according to the packed input. |} 123123 === CPUs with AVX2 === * [[Intel Corporation|Intel]] ** [[Haswell (microarchitecture)|Haswell]] processors (Q2 2013) and newer, except models branded as Celeron and Pentium.123123 ** Celeron and Pentium branded processors starting with [[Tiger Lake 231232312312312323123231232312312312323123123123231232312312312323123231231231231231232312323123231231231232312323123231232312323123123123123123231232312323123231232312323123231232312323123231231231232312323123231231231232312323123231232312323123123123231231231232312323123231232312312312323123123123231232312323123231232312312312312312323123231232312312312323123231232312312312312312312312312312323123123123123123231232312323123123123123123231232312323123123123231231231232312323123123123231232312312312312312312312323123231232312323123123123123123231231231232312323123123123123123231232312323123123123123123231232312323123231232312312312323123231232312312312312312312312323123231232312323123123123231232312312312323123231232312323123123123231232312323123123123231232312323123123123231232312323123231232312323123231231231231231231231232312323123231232312323123231231231231231231231232312323123231232312323123123123123123231232312323123231232312312312323123231231231232312323123123123231232312312312323123231232312323123231232312323123231231231232312312312312312323123231231231232312323123123123231231231232312323123231231231232312312312323123231232312323123231232312323123231232312312312323123231232312323123231232312323123231231231232312323123231231231232312323123123123231231231231231232312312312323123123123231231231231231232312323123123123123123123123231232312323123231232312323123231232312323123231232312323123231231231231231232312323123123123231232312323123231231231232312323123231232312323123123123231232312323123231232312323123231231231232312323123123123231232312323123231232312312312312312323123231232312323123123123123123123123(microprocessor123123123123123123)|Tiger Lake]] (Q3 2020) and newer.<ref name="9r8b9" /> * [[Advanced Micro Devices|AMD]] ** [[Excavator (microarchitecture)|Excavator]] processors (Q2 2015) and newer. * [[VIA Technologies|VIA]]: ** Nano QuadCore ** Eden X4 == AVX-512 == {{Main article|AVX-512}} ''AVX-512'' are 512-bit extensions to the 256-bit Advanced Vector Extensions SIMD instructions for x86 instruction set architecture proposed by [[Intel Corporation|Intel]] in July 2013, and are supported with Intel's [[Knights Landing (microarchitecture)|Knights Landing]] processor.<ref name="reinders512" /> AVX-512 instructions are encoded with the new [[EVEX prefix]]. It allows 4 operands, 8 new 64-bit [[AVX-512#Opmask registers|opmask registers]], scalar memory mode with automatic broadcast, explicit rounding control, and compressed displacement memory [[addressing mode]]. The width of the register file is increased to 512 bits and total register count increased to 32 (registers ZMM0-ZMM31) in x86-64 mode. AVX-512 consists of multiple instruction subsets, not all of which are meant to be supported by all processors implementing them. The instruction set consists of the following: * AVX-512 Foundation (F){{snd}} adds several new instructions and expands most 32-bit and 64-bit floating point SSE-SSE4.1 and AVX/AVX2 instructions with [[EVEX prefix|EVEX coding scheme]] to support the 512-bit registers, operation masks, parameter broadcasting, and embedded rounding and exception control * AVX-512 Conflict Detection Instructions (CD){{snd}} efficient conflict detection to allow more loops to be vectorized, supported by Knights Landing<ref name="reinders512" /> * AVX-512 Exponential and Reciprocal Instructions (ER){{snd}} exponential and reciprocal operations designed to help implement transcendental operations, supported by Knights Landing<ref name="reinders512" /> * AVX-512 Prefetch Instructions (PF){{snd}} new prefetch capabilities, supported by Knights Landing<ref name="reinders512" /> * AVX-512 Vector Length Extensions (VL){{snd}} extends most AVX-512 operations to also operate on XMM (128-bit) and YMM (256-bit) registers (including XMM16-XMM31 and YMM16-YMM31 in x86-64 mode)<ref name="reinders512b" /> * AVX-512 Byte and Word Instructions (BW){{snd}} extends AVX-512 to cover 8-bit and 16-bit integer operations<ref name="reinders512b">{{cite web |url= https://software.intel.com/en-us/blogs/additional-avx-512-instructions |title= Additional AVX-512 instructions |date= July 17, 2014 |author= James Reinders |publisher= [[Intel]] |access-date= August 3, 2014}}</ref> * AVX-512 Doubleword and Quadword Instructions (DQ){{snd}} enhanced 32-bit and 64-bit integer operations<ref name="reinders512b" /> * AVX-512 Integer [[Fused Multiply Add]] (IFMA){{snd}} fused multiply add for 512-bit integers.<ref name="newisa">{{cite web|url=https://software.intel.com/en-us/intel-architecture-instruction-set-extensions-programming-reference|title=Intel Architecture Instruction Set Extensions Programming Reference|publisher=[[Intel]]|format=PDF|access-date=January 29, 2014}}</ref>{{rp|746}} * AVX-512 Vector Byte Manipulation Instructions (VBMI) adds vector byte permutation instructions which are not present in AVX-512BW. * AVX-512 Vector Neural Network Instructions Word variable precision (4VNNIW){{snd}} vector instructions for deep learning. * AVX-512 Fused Multiply Accumulation Packed Single precision (4FMAPS){{snd}} vector instructions for deep learning. * VPOPCNTDQ{{snd}} count of bits set to 1.<ref name="iaiseaffpr">{{cite web|url=https://software.intel.com/en-us/intel-architecture-instruction-set-extensions-programming-reference|title=Intel® Architecture Instruction Set Extensions and Future Features Programming Reference|access-date=October 16, 2017|publisher=Intel}}</ref> * VPCLMULQDQ{{snd}} carry-less multiplication of quadwords.<ref name="iaiseaffpr" /> * AVX-512 Vector Neural Network Instructions (VNNI){{snd}} vector instructions for deep learning.<ref name="iaiseaffpr" /> * AVX-512 Galois Field New Instructions (GFNI){{snd}} vector instructions for calculating [[Galois field]].<ref name="iaiseaffpr" /> * AVX-512 Vector AES instructions (VAES){{snd}} vector instructions for [[Advanced Encryption Standard|AES]] coding.<ref name="iaiseaffpr" /> * AVX-512 Vector Byte Manipulation Instructions 2 (VBMI2){{snd}} byte/word load, store and concatenation with shift.<ref name="iaiseaffpr" /> * AVX-512 Bit Algorithms (BITALG){{snd}} byte/word [[bit manipulation]] instructions expanding VPOPCNTDQ.<ref name="iaiseaffpr" /> * AVX-512 [[Bfloat16 floating-point format|Bfloat16]] Floating-Point Instructions (BF16){{snd}} vector instructions for AI acceleration. * AVX-512 [[Half-precision floating-point format|Half-Precision]] Floating-Point Instructions (FP16){{snd}} vector instructions for operating on floating-point and complex numbers with reduced precision. Only the core extension AVX-512F (AVX-512 Foundation) is required by all implementations, though all current implementations also support CD (conflict detection). All central processors with AVX-512 also support VL, DQ and BW. The ER, PF, 4VNNIW and 4FMAPS instruction set extensions are currently only implemented in Intel computing coprocessors. The updated SSE/AVX instructions in AVX-512F use the same mnemonics as AVX versions; they can operate on 512-bit ZMM registers, and will also support 128/256 bit XMM/YMM registers (with AVX-512VL) and byte, word, doubleword and quadword integer operands (with AVX-512BW/DQ and VBMI).<ref name="newisa" />{{rp|23}} === CPUs with AVX-512 === {| class="wikitable" ! scope="col" | AVX-512 Subset ! scope="col" | F ! scope="col" | CD ! scope="col" | ER ! scope="col" | PF ! scope="col" | 4FMAPS ! scope="col" | 4VNNIW ! scope="col" | VPOPCNTDQ ! scope="col" | VL ! scope="col" | DQ ! scope="col" | BW ! scope="col" | IFMA ! scope="col" | VBMI ! scope="col" | VBMI2 ! scope="col" | BITALG ! scope="col" | VNNI ! scope="col" | BF16 ! scope="col" | VPCLMULQDQ ! scope="col" | GFNI ! scope="col" | VAES ! scope="col" | VP2INTERSECT ! scope="col" | FP16 |- ! scope="row" |Intel [[Xeon Phi#Knights Landing|Knights Landing]] (2016) |colspan="2" rowspan="9" {{Yes}} |colspan="2" rowspan="2" {{Yes}} |colspan="17" {{No}} |- ! scope="row" |Intel [[Knights Mill]] (2017) |colspan="3" {{Yes}} |colspan="14" {{No}} |- ! scope="row" |Intel [[Skylake (microarchitecture)|Skylake-SP]], [[Skylake (microarchitecture)|Skylake-X]] (2017) |colspan="4" rowspan="11" {{No}} |rowspan="4" {{No}} |colspan="3" rowspan="4" {{Yes}} |colspan="11" {{No}} |- ! scope="row" |Intel [[Cannon Lake (microarchitecture)|Cannon Lake]] (2018) |colspan="2" {{Yes}} |colspan="9" {{No}} |- ! scope="row" |Intel [[Cascade Lake (microarchitecture)|Cascade Lake-SP]] (2019) |colspan="4" {{No}} |colspan="1" {{Yes}} |colspan="6" {{No}} |- ! scope="row" |Intel [[Cooper Lake (microarchitecture)|Cooper Lake]] (2020) |colspan="4" {{No}} |colspan="2" {{Yes}} |colspan="5" {{No}} |- ! scope="row" |Intel [[Ice Lake (microarchitecture)|Ice Lake]] (2019) |colspan="9" rowspan="3" {{Yes}} |rowspan="3" {{No}} |colspan="3" rowspan="3" {{Yes}} |colspan="2" {{No}} |- !Intel [[Tiger Lake (microprocessor)|Tiger Lake]] (2020) |{{Yes}} |{{No}} |- !Intel [[Rocket Lake]] (2021) |colspan="2" {{No}} |- !Intel [[Alder Lake (microprocessor)|Alder Lake]] (2021) |colspan="2" {{Maybe|}} |colspan="15" {{Maybe|Not officially supported, but can be enabled on some motherboards with some BIOS versions{{ref|adl-avx512-note|Note 1}}}} |- !AMD [[Zen 4]] (2022) |colspan="2" rowspan="3" {{Yes}} |colspan="13" rowspan="3" {{Yes}} |colspan="2" {{No}} |- !Intel [[Sapphire Rapids (microprocessor)|Sapphire Rapids]] (2023) |{{No}} |{{Yes}} |- !AMD [[Zen 5]] (2024) | colspan="1" {{Yes}} | colspan="1" {{No}} |} <ref name="gu9Hh">{{Cite web|url=https://software.intel.com/en-us/articles/intel-software-development-emulator|title=Intel® Software Development Emulator {{!}} Intel® Software|website=software.intel.com|access-date=June 11, 2016}}</ref> {{note|adl-avx512-note|Note 1}}: AVX-512 is disabled by default in [[Alder Lake (microprocessor)|Alder Lake]] processors. On some motherboards with some BIOS versions, AVX-512 can be enabled in the BIOS, but this requires disabling E-cores.<ref>{{cite web|url=https://www.anandtech.com/show/17047/the-intel-12th-gen-core-i912900k-review-hybrid-performance-brings-hybrid-complexity/2|title=The Intel 12th Gen Core i9-12900K Review: Hybrid Performance Brings Hybrid Complexity|website=AnandTech|first1=Ian|last1=Cutress|first2=Andrei|last2=Frumusanu|language=en|access-date=2021-11-05}}</ref> However, Intel has begun fusing AVX-512 off on newer Alder Lake processors.<ref>{{Cite web |first=Paul |last=Alcorn |date=2022-03-02 |title=Intel Nukes Alder Lake's AVX-512 Support, Now Fuses It Off in Silicon |url=https://www.tomshardware.com/news/intel-nukes-alder-lake-avx-512-now-fuses-it-off-in-silicon |access-date=2022-10-03 |website=Tom's Hardware |language=en}}</ref> === Compilers supporting AVX-512 === * [[GNU Compiler Collection|GCC]] 4.9 and newer<ref name="ENvJZ">{{Cite web|url=https://gcc.gnu.org/gcc-4.9/changes.html#x86|title=GCC 4.9 Release Series — Changes, New Features, and Fixes{{snd}} GNU Project - Free Software Foundation (FSF)|website=gcc.gnu.org|language=en|access-date=April 3, 2017}}</ref> * [[Clang]] 3.9 and newer<ref name="MLilb">{{Cite web|url=http://releases.llvm.org/3.9.0/docs/ReleaseNotes.html#id10|title=LLVM 3.9 Release Notes — LLVM 3.9 documentation|website=releases.llvm.org|access-date=April 3, 2017}}</ref> * [[Intel C++ Compiler|ICC]] 15.0.1 and newer<ref name="ZZVZ5">{{Cite web|url=https://software.intel.com/en-us/articles/intel-parallel-studio-xe-2015-composer-edition-c-release-notes|title=Intel® Parallel Studio XE 2015 Composer Edition C++ Release Notes {{!}} Intel® Software|website=software.intel.com|language=en|access-date=April 3, 2017}}</ref> * Microsoft Visual Studio 2017 C++ Compiler<ref name="pZckG">{{Cite web|url=https://blogs.msdn.microsoft.com/vcblog/2017/07/11/microsoft-visual-studio-2017-supports-intel-avx-512/|language=en|title=Microsoft Visual Studio 2017 Supports Intel® AVX-512|date=July 11, 2017 }}</ref> * [[Netwide Assembler|NASM]] 2.11 and newer<ref name="nasm-version-history"/> == AVX-VNNI, AVX-IFMA{{anchor|AVX-VNNI|AVX-IFMA}} == AVX-VNNI is a [[VEX prefix|VEX]]-coded variant of the [[AVX-512#VNNI|AVX512-VNNI]] instruction set extension. Similarly, AVX-IFMA is a [[VEX prefix|VEX]]-coded variant of [[AVX-512#IFMA|AVX512-IFMA]]. These extensions provide the same sets of operations as their AVX-512 counterparts, but are limited to 256-bit vectors and do not support any additional features of [[EVEX prefix|EVEX]] encoding, such as broadcasting, opmask registers or accessing more than 16 vector registers. These extensions allow support of VNNI and IFMA operations even when full [[AVX-512]] support is not implemented in the processor. === CPUs with AVX-VNNI === * [[Intel Corporation|Intel]] ** [[Alder Lake (microprocessor)|Alder Lake]] processors (Q4 2021) and newer. * [[AMD]] ** [[Zen 5]] processors (H2 2024) and newer.<ref>{{Cite web |title=AMD Zen 5 Compiler Support Posted For GCC - Confirms New AVX Features & More |url=https://www.phoronix.com/news/AMD-Zen-5-Znver-5-GCC |access-date=2024-02-10 |website=www.phoronix.com |language=en}}</ref> === CPUs with AVX-IFMA === * [[Intel Corporation|Intel]] ** [[Sierra Forest]] E-core-only Xeon processors (H2 2024) and newer. ** Grand Ridge special-purpose processors and newer. ** [[Meteor Lake]] mobile processors (Q4 2023) and newer. ** [[Arrow Lake (microprocessor)|Arrow Lake]] desktop processors and newer. == AVX10 == AVX10, announced in August 2023, is a new, "converged" AVX instruction set. It addresses several issues of AVX-512, in particular that it is split into too many parts<ref>{{Cite web|last=Mann|first=Tobias|title=Intel's AVX10 promises benefits of AVX-512 without baggage|url=https://www.theregister.com/2023/08/15/avx10_intel_interviews/|date=2023-08-15|access-date=2023-08-20|website=www.theregister.com|language=en}}</ref> (20 feature flags) and that it makes 512-bit vectors mandatory to support. AVX10 presents a simplified CPUID interface to test for instruction support, consisting of the AVX10 version number (indicating the set of instructions supported, with later versions always being a superset of an earlier one) and the available maximum vector length (256 or 512 bits).<ref name=avx10-wp>{{cite web |title=The Converged Vector ISA: Intel® Advanced Vector Extensions 10 Technical Paper |url=https://www.intel.com/content/www/us/en/content-details/784343/the-converged-vector-isa-intel-advanced-vector-extensions-10-technical-paper.html |website=Intel}}</ref> A combined notation is used to indicate the version and vector length: for example, AVX10.2/256 indicates that a CPU is capable of the second version of AVX10 with a maximum vector width of 256 bits.<ref name=avx10-spec/> The first and "early" version of AVX10, notated AVX10.1, will ''not'' introduce any instructions or encoding features beyond what is already in AVX-512 (specifically, in Intel [[Sapphire Rapids (microprocessor)|Sapphire Rapids]]: AVX-512F, CD, VL, DQ, BW, IFMA, VBMI, VBMI2, BITALG, VNNI, GFNI, VPOPCNTDQ, VPCLMULQDQ, VAES, BF16, FP16). The second and "fully-featured" version, AVX10.2, introduces new features such as YMM embedded rounding and Suppress All Exception. For CPUs supporting AVX10 and 512-bit vectors, all legacy AVX-512 feature flags will remain set to facilitate applications supporting AVX-512 to continue using AVX-512 instructions.<ref name=avx10-spec>{{cite web |title=Intel® Advanced Vector Extensions 10 (Intel® AVX10) Architecture Specification |url=https://www.intel.com/content/www/us/en/content-details/784267/intel-advanced-vector-extensions-10-intel-avx10-architecture-specification.html |website=Intel}}</ref> AVX10.1/512 will be available on [[Granite Rapids]].<ref name=avx10-spec/> == APX == {{main|X86#APX (Advanced Performance Extensions)}} APX is a new extension. It is not focused on vector computation, but provides RISC-like extensions to the x86-64 architecture by doubling the number of general purpose registers to 32 and introducing three-operand instruction formats. AVX is only tangentially affected as APX introduces extended operands.<ref>{{cite web |title=Intel® Advanced Performance Extensions (Intel® APX) Architecture Specification |url=https://www.intel.com/content/www/us/en/content-details/784266/intel-advanced-performance-extensions-intel-apx-architecture-specification.html?wapkw=apx |publisher=Intel}}</ref><ref>{{Cite web|last=Robinson|first=Dan|title=Intel discloses x86 and vector instructions for future chips|url=https://www.theregister.com/2023/07/26/intel_x86_vector_instructions/|date=2023-07-26|access-date=2023-08-20|website=www.theregister.com|language=en}}</ref> == Applications == * Suitable for [[floating point]]-intensive calculations in multimedia, scientific and financial applications (AVX2 adds support for [[integer]] operations). * Increases parallelism and throughput in floating point [[SIMD]] calculations. * Reduces register load due to the non-destructive instructions. * Improves Linux RAID software performance (required AVX2, AVX is not sufficient)<ref name="K5FHF">{{Cite web |url= https://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=2c935842bdb46f5f557426feb4d2bdfdad1aa5f9 |archive-url= https://archive.today/20130415065954/https://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=2c935842bdb46f5f557426feb4d2bdfdad1aa5f9 |url-status= dead |archive-date= April 15, 2013 |title= Linux RAID |date= February 17, 2013 |publisher= LWN}}</ref> === Software === * [[Blender (software)|Blender]] uses AVX, AVX2 and AVX-512 in the Cycles render engine.<ref>{{cite web |last1=Jaroš |first1=Milan |last2=Strakoš |first2=Petr |last3=Říha |first3=Lubomír |title=Rendering in Blender using AVX-512 Vectorization |url=https://www.ixpug.org/documents/1520629330Jaros-IXPUG-CINECABlender5.pdf |website=Intel eXtreme Performance Users Group |publisher=[[VSB – Technical University of Ostrava|Technical University of Ostrava]] |access-date=28 October 2022 |language=en |date=28 May 2022 }}</ref> * Bloombase uses AVX, AVX2 and AVX-512 in their Bloombase Cryptographic Module (BCM). * [[Botan (programming library)|Botan]] uses both AVX and AVX2 when available to accelerate some algorithms, like ChaCha. * [[BSAFE]] C toolkits uses AVX and AVX2 where appropriate to accelerate various cryptographic algorithms.<ref>{{cite web|url=https://www.dell.com/support/kbdoc/000190141/comparison-of-bsafe-cryptographic-library-implementations|title=Comparison of BSAFE cryptographic library implementations|date=July 25, 2023}}</ref> * [[Crypto++]] uses both AVX and AVX2 when available to accelerate some algorithms, like Salsa and ChaCha. * [[Esri]] [[ArcGIS]] Data Store uses AVX2 for graph storage.<ref>{{Cite web |title=ArcGIS Data Store 11.2 System Requirements |url=https://enterprise.arcgis.com/en/system-requirements/latest/windows/arcgis-data-store-system-requirements.htm#ESRI_SECTION1_40B68391470C40E0B06DFA2A4412C6E7 |access-date=2024-01-24 |website=ArcGIS Enterprise}}</ref> * [[OpenSSL]] uses AVX- and AVX2-optimized cryptographic functions since version 1.0.2.<ref name="04fXZ">{{Cite web |title=Improving OpenSSL Performance|date=May 26, 2015|access-date=February 28, 2017|url=https://software.intel.com/en-us/articles/improving-openssl-performance}}</ref> Support for AVX-512 was added in version 3.0.0.<ref>{{cite web|url=https://github.com/openssl/openssl/blob/openssl-3.0.0/CHANGES.md|title=OpenSSL 3.0.0 release notes|website=[[GitHub]] |date=September 7, 2021}}</ref> Some of these optimizations are also present in various clones and forks, like LibreSSL. * [[Prime95]]/MPrime, the software used for [[GIMPS]], started using the AVX instructions since version 27.1, AVX2 since 28.6 and AVX-512 since 29.1.<ref>{{cite web|url=https://www.mersenne.org/download/whatsnew_308b15.txt|title=Prime95 release notes|access-date=2022-07-10}}</ref> * [https://code.videolan.org/videolan/dav1d dav1d] [[AV1]] decoder can use AVX2 and AVX-512 on supported CPUs.<ref name="O9KLK">{{Cite web |title=dav1d: performance and completion of the first release|date=November 21, 2018|access-date=November 22, 2018|url=http://www.jbkempf.com/blog/post/2018/dav1d-toward-the-first-release}}</ref><ref>{{cite web|url=https://code.videolan.org/videolan/dav1d/-/tags/0.6.0|title=dav1d 0.6.0 release notes|date=March 6, 2020}}</ref> * [https://gitlab.com/AOMediaCodec/SVT-AV1 SVT-AV1] [[AV1]] encoder can use AVX2 and AVX-512 to accelerate video encoding.<ref>{{cite web|url=https://gitlab.com/AOMediaCodec/SVT-AV1/-/blob/078d27e0bdc52cad2dc28e630fdd6391857e044e/CHANGELOG.md#070-2019-09-26|title=SVT-AV1 0.7.0 release notes|date=September 26, 2019}}</ref> * [[dnetc]], the software used by [[distributed.net]], has an AVX2 core available for its RC5 project and will soon release one for its OGR-28 project. * [[Einstein@Home]] uses AVX in some of their distributed applications that search for [[gravitational waves]].<ref name="GLExU">{{Cite web |title= Einstein@Home Applications | url= http://einsteinathome.org/apps.php}}</ref> * [[Folding@home]] uses AVX on calculation cores implemented with [[GROMACS]] library. * Helios uses AVX and AVX2 hardware acceleration on 64-bit x86 hardware.<ref>{{Cite web|title=FAQ, Helios|url=https://heliosmusic.io/faq/|access-date=2021-07-05|website=Helios|language=en-US}}</ref> * [[Horizon: Zero Dawn]] uses AVX in its Decima game engine. * [[PCSX2]] and [[RPCS3]], are open source [[PlayStation 2|PS2]] and [[PlayStation 3|PS3]] emulators respectively that use AVX2 and [[AVX-512]] instructions to emulate games. * [[Network Device Interface]], an IP video/audio protocol developed by NewTek for live broadcast production, uses AVX and AVX2 for increased performance. * [[TensorFlow]] since version 1.6 and tensorflow above versions requires CPU supporting at least AVX.<ref name="MDJ95">{{Cite web |title= Tensorflow 1.6 | website= [[GitHub]]| url= https://github.com/tensorflow/tensorflow/releases/tag/v1.6.0}}</ref> * [[x264]], [[x265]] and [[Versatile Video Coding|VTM]] video encoders can use AVX2 or AVX-512 to speed up encoding. * Various CPU-based [[cryptocurrency]] miners (like pooler's cpuminer for [[Bitcoin]] and [[Litecoin]]) use AVX and AVX2 for various cryptography-related routines, including [[SHA-256]] and [[scrypt]]. * [[libsodium]] uses AVX in the implementation of scalar multiplication for [[Curve25519]] and [[Ed25519]] algorithms, AVX2 for [[BLAKE2b]], [[Salsa20]], [[ChaCha20]], and AVX2 and AVX-512 in implementation of [[Argon2]] algorithm. * [[libvpx]] open source reference implementation of VP8/VP9 encoder/decoder, uses AVX2 or AVX-512 when available. * [[libjpeg-turbo]] uses AVX2 to accelerate image processing. * [[FFTW]] can utilize AVX, AVX2 and AVX-512 when available. * LLVMpipe, a software OpenGL renderer in [[Mesa (computer graphics)|Mesa]] using Gallium and [[LLVM]] infrastructure, uses AVX2 when available. * [[glibc]] uses AVX2 (with [[FMA instruction set|FMA]]) and AVX-512 for optimized implementation of various mathematical (i.e. <code>expf</code>, <code>sinf</code>, <code>powf</code>, <code>atanf</code>, <code>atan2f</code>) and string (<code>memmove</code>, <code>memcpy</code>, etc.) functions in [[libc]]. * [[Linux kernel]] can use AVX or AVX2, together with [[AES instruction set#x86 architecture processors|AES-NI]] as optimized implementation of [[AES-GCM]] cryptographic algorithm. * [[Linux kernel]] uses AVX or AVX2 when available, in optimized implementation of multiple other cryptographic ciphers: [[Camellia (cipher)|Camellia]], [[CAST5]], [[CAST6]], [[Serpent (cipher)|Serpent]], [[Twofish]], [[MORUS-1280]], and other primitives: [[Poly1305]], [[SHA-1]], [[SHA-256]], [[SHA-512]], [[ChaCha20]]. * POCL, a portable Computing Language, that provides implementation of [[OpenCL]], makes use of AVX, AVX2 and AVX-512 when possible. * [[.NET]] and [[.NET Framework]] can utilize AVX, AVX2 through the generic <code>System.Numerics.Vectors</code> namespace. * [[.NET Core]], starting from version 2.1 and more extensively after version 3.0 can directly use all AVX, AVX2 intrinsics through the <code>System.Runtime.Intrinsics.X86</code> namespace. * [[EmEditor]] 19.0 and above uses AVX2 to speed up processing.<ref name="L6yrb">[https://www.emeditor.com/text-editor-features/history/new-in-version-19-0/ New in Version 19.0 – EmEditor (Text Editor)]</ref> * Native Instruments' Massive X softsynth requires AVX.<ref name="BUArc">{{Cite web|url=http://support.native-instruments.com/hc/en-us/articles/360001544678-MASSIVE-X-Requires-AVX-Compatible-Processor|title=MASSIVE X Requires AVX Compatible Processor|website=Native Instruments|language=en-US|access-date=2019-11-29}}</ref> * [[Microsoft Teams]] uses AVX2 instructions to create a blurred or custom background behind video chat participants,<ref name="nSmsk">{{Cite web|url=https://docs.microsoft.com/en-us/microsoftteams/hardware-requirements-for-the-teams-app|title=Hardware requirements for Microsoft Teams|website=Microsoft|language=en-US|access-date=2020-04-17}}</ref> and for background noise suppression.<ref name="6JbHG">{{cite web |title=Reduce background noise in Teams meetings |url=https://support.microsoft.com/en-gb/office/reduce-background-noise-in-teams-meetings-1a9c6819-137d-4b3b-a1c8-4ab20b234c0d |website=Microsoft Support |access-date=5 January 2021}}</ref> * [[Pale Moon]] custom Windows builds greatly increase browsing speed due to the use of AVX2. * [https://github.com/simdjson/simdjson {{Not a typo|simdjson}}], a [[JSON]] parsing library, uses AVX2 and AVX-512 to achieve improved decoding speed.<ref name="33car">{{cite journal |last1=Langdale |first1=Geoff |last2=Lemire |first2=Daniel |arxiv=1902.08318 |title=Parsing Gigabytes of JSON per Second |journal=The VLDB Journal |date=2019 |volume=28 |issue=6 |pages=941–960 |doi=10.1007/s00778-019-00578-5 |s2cid=67856679}}</ref><ref>{{cite web|url=https://github.com/simdjson/simdjson/releases/tag/v2.1.0|title={{Not a typo|simdjson}} 2.1.0 release notes|website=[[GitHub]] |date=June 30, 2022}}</ref> * [https://github.com/intel/x86-simd-sort x86-simd-sort], a library with sorting algorithms for 16, 32 and 64-bit numeric data types, uses AVX2 and AVX-512. The library is used in [[NumPy]] and [[OpenJDK]] to accelerate sorting algorithms.<ref>{{cite web|url=https://www.phoronix.com/news/Intel-x86-simd-sort-3.0|last=Larabel|first=Michael|website=Phoronix|title=OpenJDK Merges Intel's x86-simd-sort For Speeding Up Data Sorting 7~15x|date=October 6, 2023}}</ref> * [https://github.com/zlib-ng/zlib-ng zlib-ng], an optimized version of [[zlib]], contains AVX2 and AVX-512 versions of some data compression algorithms. * [[Tesseract (software)|Tesseract OCR]] engine uses AVX, AVX2 and AVX-512 to accelerate character recognition.<ref>{{cite web|url=https://www.phoronix.com/scan.php?page=news_item&px=Tesseract-OCR-5.2-Released|last=Larabel|first=Michael|website=Phoronix|title=Tesseract OCR 5.2 Engine Finds Success With AVX-512F|date=July 7, 2022}}</ref> == Downclocking == Since AVX instructions are wider, they consume more power and generate more heat. Executing heavy AVX instructions at high CPU clock frequencies may affect CPU stability due to excessive [[voltage droop]] during load transients. Some Intel processors have provisions to reduce the [[Intel Turbo Boost|Turbo Boost]] frequency limit when such instructions are being executed. This reduction happens even if the CPU hasn't reached its thermal and power consumption limits. On [[Skylake (microarchitecture)|Skylake]] and its derivatives, the throttling is divided into three levels:<ref name="lic">{{cite web |last1=Lemire |first1= Daniel |title=AVX-512: when and how to use these new instructions |url=https://lemire.me/blog/2018/09/07/avx-512-when-and-how-to-use-these-new-instructions/ |website=Daniel Lemire's blog|date= September 7, 2018 }}</ref><ref name="q41V7">{{cite web |author1=BeeOnRope |title=SIMD instructions lowering CPU frequency |url=https://stackoverflow.com/a/56861355 |website=Stack Overflow}}</ref> * L0 (100%): The normal turbo boost limit. * L1 (~85%): The "AVX boost" limit. Soft-triggered by 256-bit "heavy" (floating-point unit: FP math and integer multiplication) instructions. Hard-triggered by "light" (all other) 512-bit instructions. * L2 (~60%):{{Dubious|Re: Downclocking|date=July 2021}} The "AVX-512 boost" limit. Soft-triggered by 512-bit heavy instructions. The frequency transition can be soft or hard. Hard transition means the frequency is reduced as soon as such an instruction is spotted; soft transition means that the frequency is reduced only after reaching a threshold number of matching instructions. The limit is per-thread.<ref name="lic" /> In [[Ice Lake (microarchitecture)|Ice Lake]], only two levels persist:<ref name="icl-avx512-freq">{{cite web |last1=Downs |first1=Travis |title=Ice Lake AVX-512 Downclocking |url=https://travisdowns.github.io/blog/2020/08/19/icl-avx512-freq.html |website=Performance Matters blog|date=August 19, 2020 }}</ref> * L0 (100%): The normal turbo boost limit. * L1 (~97%): Triggered by any 512-bit instructions, but only when single-core boost is active; not triggered when multiple cores are loaded. [[Rocket Lake]] processors do not trigger frequency reduction upon executing any kind of vector instructions regardless of the vector size.<ref name="icl-avx512-freq" /> However, downclocking can still happen due to other reasons, such as reaching thermal and power limits. Downclocking means that using AVX in a mixed workload with an Intel processor can incur a frequency penalty. Avoiding the use of wide and heavy instructions help minimize the impact in these cases. AVX-512VL allows for using 256-bit or 128-bit operands in AVX-512 instructions, making it a sensible default for mixed loads.<ref name="LttMf">{{cite web |title=x86 - AVX 512 vs AVX2 performance for simple array processing loops |url=https://stackoverflow.com/a/52543573 |website=Stack Overflow}}</ref> On supported and unlocked variants of processors that down-clock, the clock ratio reduction offsets (typically called AVX and AVX-512 offsets) are adjustable and may be turned off entirely (set to 0x) via Intel's Overclocking / Tuning utility or in BIOS if supported there.<ref name="intelXTUGuide">{{cite web |title=Intel® Extreme Tuning Utility (Intel® XTU) Guide to Overclocking : Advanced Tuning |url=https://www.intel.com/content/www/us/en/gaming/resources/overclocking-xtu-guide.html#articleparagraph-14 |website=Intel |access-date=18 July 2021 |language=en |quote=See image in linked section, where AVX2 ratio has been set to 0.}}</ref> == See also == * [[F16C]] instruction set extension * [[Memory Protection Extensions]] * [[AArch64#Scalable Vector Extension (SVE)|Scalable Vector Extension for ARM]] - a new vector instruction set (supplementing [[VFP (instruction set)|VFP]] and [[NEON (instruction set)|NEON]]) similar to AVX-512, with some additional features. == References == {{reflist|1=30em}} == External links == * [https://software.intel.com/sites/landingpage/IntrinsicsGuide/ Intel Intrinsics Guide] * [https://docs.oracle.com/cd/E36784_01/pdf/E36859.pdf x86 Assembly Language Reference Manual] {{AMD technology}} {{Intel technology}} {{Multimedia extensions|state=uncollapsed}} [[Category:X86 instructions]] [[Category:SIMD computing]] [[Category:AMD technologies]]'
Unified diff of changes made by edit ($1) (edit_diff)
'@@ -295,9 +295,9 @@ | Shift right arithmetically. Allows variable shifts where each element is shifted according to the packed input. |} - +123123 === CPUs with AVX2 === * [[Intel Corporation|Intel]] -** [[Haswell (microarchitecture)|Haswell]] processors (Q2 2013) and newer, except models branded as Celeron and Pentium. -** Celeron and Pentium branded processors starting with [[Tiger Lake (microprocessor)|Tiger Lake]] (Q3 2020) and newer.<ref name="9r8b9" /> +** [[Haswell (microarchitecture)|Haswell]] processors (Q2 2013) and newer, except models branded as Celeron and Pentium.123123 +** Celeron and Pentium branded processors starting with [[Tiger Lake 231232312312312323123231232312312312323123123123231232312312312323123231231231231231232312323123231231231232312323123231232312323123123123123123231232312323123231232312323123231232312323123231231231232312323123231231231232312323123231232312323123123123231231231232312323123231232312312312323123123123231232312323123231232312312312312312323123231232312312312323123231232312312312312312312312312312323123123123123123231232312323123123123123123231232312323123123123231231231232312323123123123231232312312312312312312312323123231232312323123123123123123231231231232312323123123123123123231232312323123123123123123231232312323123231232312312312323123231232312312312312312312312323123231232312323123123123231232312312312323123231232312323123123123231232312323123123123231232312323123123123231232312323123231232312323123231231231231231231231232312323123231232312323123231231231231231231231232312323123231232312323123123123123123231232312323123231232312312312323123231231231232312323123123123231232312312312323123231232312323123231232312323123231231231232312312312312312323123231231231232312323123123123231231231232312323123231231231232312312312323123231232312323123231232312323123231232312312312323123231232312323123231232312323123231231231232312323123231231231232312323123123123231231231231231232312312312323123123123231231231231231232312323123123123123123123123231232312323123231232312323123231232312323123231232312323123231231231231231232312323123123123231232312323123231231231232312323123231232312323123123123231232312323123231232312323123231231231232312323123123123231232312323123231232312312312312312323123231232312323123123123123123123123(microprocessor123123123123123123)|Tiger Lake]] (Q3 2020) and newer.<ref name="9r8b9" /> * [[Advanced Micro Devices|AMD]] ** [[Excavator (microarchitecture)|Excavator]] processors (Q2 2015) and newer. '
New page size ($1) (new_size)
55428
Old page size ($1) (old_size)
53760
Size change in edit ($1) (edit_delta)
1668
Lines added in edit ($1) (added_lines)
[ 0 => '123123', 1 => '** [[Haswell (microarchitecture)|Haswell]] processors (Q2 2013) and newer, except models branded as Celeron and Pentium.123123', 2 => '** Celeron and Pentium branded processors starting with [[Tiger Lake 231232312312312323123231232312312312323123123123231232312312312323123231231231231231232312323123231231231232312323123231232312323123123123123123231232312323123231232312323123231232312323123231231231232312323123231231231232312323123231232312323123123123231231231232312323123231232312312312323123123123231232312323123231232312312312312312323123231232312312312323123231232312312312312312312312312312323123123123123123231232312323123123123123123231232312323123123123231231231232312323123123123231232312312312312312312312323123231232312323123123123123123231231231232312323123123123123123231232312323123123123123123231232312323123231232312312312323123231232312312312312312312312323123231232312323123123123231232312312312323123231232312323123123123231232312323123123123231232312323123123123231232312323123231232312323123231231231231231231231232312323123231232312323123231231231231231231231232312323123231232312323123123123123123231232312323123231232312312312323123231231231232312323123123123231232312312312323123231232312323123231232312323123231231231232312312312312312323123231231231232312323123123123231231231232312323123231231231232312312312323123231232312323123231232312323123231232312312312323123231232312323123231232312323123231231231232312323123231231231232312323123123123231231231231231232312312312323123123123231231231231231232312323123123123123123123123231232312323123231232312323123231232312323123231232312323123231231231231231232312323123123123231232312323123231231231232312323123231232312323123123123231232312323123231232312323123231231231232312323123123123231232312323123231232312312312312312323123231232312323123123123123123123123(microprocessor123123123123123123)|Tiger Lake]] (Q3 2020) and newer.<ref name="9r8b9" />' ]
Lines removed in edit ($1) (removed_lines)
[ 0 => '', 1 => '** [[Haswell (microarchitecture)|Haswell]] processors (Q2 2013) and newer, except models branded as Celeron and Pentium.', 2 => '** Celeron and Pentium branded processors starting with [[Tiger Lake (microprocessor)|Tiger Lake]] (Q3 2020) and newer.<ref name="9r8b9" />' ]
Whether or not the change was made through a Tor exit node ($1) (tor_exit_node)
false
Unix timestamp of change ($1) (timestamp)
'1715490120'