Ethernet Throughput on NXP ARM Microcontrollers

This article presents a method for measuring Ethernet throughput that provides a good estimate of performance and illustrates the different factors that affect it.

Ethernet is the most widely installed Local Area Network (LAN) technology in the world. It has been in use since the early 1980s and is covered by the IEEE Std 802.3, which specifies a number of speed grades. In embedded systems, the most commonly used format runs at both 10 Mbps and 100 Mbps (and is often referred to as 10/100 Ethernet).

There are more than 20 NXP ARM MCUs with built-in Ethernet, covering all three generations of ARM (ARM7, ARM9, and the Cortex-M3). NXP uses essentially the same implementation across three generations, so designers can save time and resources by reusing their Ethernet function when systems move to the next generation of ARM.

This article discusses three different scenarios for measuring Ethernet throughput on the LPC1700 product and details what is really achievable in an optimized system.

Superior implementation

NXP's Ethernet block (see Figure 1) contains a full-featured 10/100 Ethernet MAC (media access controller) which uses DMA hardware acceleration to increase performance. The MAC is fully compliant with IEEE Std 802.3 and interfaces with an off-chip Ethernet PHY (physical layer) using the Media Independent Interface (MII) or Reduced MII (RMII) protocol along with the on-chip MII Management (MIIM) serial bus.
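As an illustration of the MIIM management interface mentioned above, the following is a minimal sketch of a PHY register read, assuming the CMSIS LPC17xx register names (LPC_EMAC->MADR, MCMD, MIND, MRDD). It follows the access sequence described in the LPC17xx user manual and is not code taken from this article's test project.

```c
#include "LPC17xx.h"

/* Read one 16-bit PHY register over the MIIM serial bus.
 * phy_addr: PHY address on the MIIM bus, reg_addr: register number. */
static uint16_t phy_read(uint8_t phy_addr, uint8_t reg_addr)
{
    LPC_EMAC->MADR = ((uint32_t)phy_addr << 8) | reg_addr;  /* select PHY and register */
    LPC_EMAC->MCMD = 1;                                     /* issue a read command    */
    while (LPC_EMAC->MIND & 1)                              /* wait while MIIM is busy */
        ;
    LPC_EMAC->MCMD = 0;                                     /* terminate the read      */
    return (uint16_t)LPC_EMAC->MRDD;                        /* data returned by the PHY */
}
```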

The NXP Ethernet block is distinguished by the following:

  • Full Ethernet functionality — The block supports full Ethernet operation, as specified in the 802.3 standard.

  • Enhanced architecture — NXP has enhanced the architecture with several additional features including receive filtering, automatic collision back-off and frame retransmission, and power management via clock switching.

  • DMA hardware acceleration — The block has two DMA managers, one each for transmit and receive. Automatic frame transmission and reception with Scatter-Gather DMA offloads the CPU even further.


Figure 1: LPC24xx Ethernet block diagram. NXP's Cortex-M3 architecture.


Ethernet throughput on NXP's LPC1700 microcontrollers

In an Ethernet network, two or more stations send and receive data through a shared channel (a medium) using the Ethernet protocol. Ethernet performance can mean different things for each of the network's elements (channel or stations). Bandwidth, throughput, and latency are measures which contribute to overall performance. In the case of the channel, the bandwidth is a measure of the capacity of the link, while the throughput is the rate at which usable data can be sent over the channel. In the case of the stations, Ethernet performance can mean the ability of that equipment to operate at the full bit and frame rate of the Ethernet channel. Latency, on the other hand, measures the delay caused by several factors (such as propagation times, processing times, faults, and retries).

The focus of this article is the ability of the NXP LPC1700 to operate at the full bit and frame rate of the Ethernet channel to which it is connected via the Ethernet interface (provided by the internal EMAC module plus the external PHY chip). In this context, throughput is defined as the amount of usable data (payload) per second that the MCU is able to send to, or receive from, the communication channel. The same concepts can also be applied to other NXP LPC microcontrollers supporting Ethernet.

Unfortunately, these kinds of tests generally require specific equipment, such as network analyzers and/or network traffic generators, to get precise measurements. Nevertheless, using simple test setups it is possible to obtain reasonable estimates. In fact, our goal is to understand the different factors that can affect Ethernet throughput, so users can focus on techniques to improve Ethernet performance.

Here only the throughput of the transmitter is considered; the receiver case is a little more complex because its performance depends on the performance of the transmitter that puts the information onto the channel. In other words, the throughput of the receiver will be limited by the throughput of the transmitter sending the data over the channel. Once we obtain the throughput of the transmitter, we can consider that number the ideal maximum the receiver would be able to achieve (under ideal conditions), and express the receiver's throughput relative to it.

Reference information



Figure 2: Ethernet II frame.


Considering a bit rate of 100 Mbps, and that every frame consists of the payload (useful data, minimum 46 bytes and maximum 1,500 bytes), the Ethernet header (14 bytes), the CRC (4 bytes), the preamble (8 bytes), and the inter-packet gap (12 bytes), the maximum possible frames per second and throughput are:

For minimum-sized frames (46 bytes of data): 148,809 frames/sec -> 6.84 Mbytes/sec

For maximum-sized frames (1,500 bytes of data): 8,127 frames/sec -> 12.19 Mbytes/sec

The above rates are theoretical maximums that cannot be reached in practice; any real implementation will achieve lower values (see Figure 2).

Notes:

  • frames/second is calculated by dividing the Ethernet link speed (100 Mbps) by the total frame size in bits (84 * 8 = 672 for minimum-sized frames, and 1,538 * 8 = 12,304 for maximum-sized frames).

  • Megabytes/second is calculated by multiplying the frames/second by the number of bytes of useful data in each frame (46 bytes for minimum-sized frames, and 1,500 bytes for maximum-sized frames).
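As a cross-check of the notes above, the short host-side C program below recomputes the theoretical frame rate and payload throughput from the per-frame overheads given in the article. The function and variable names are illustrative only.

```c
#include <stdio.h>

/* Recompute the theoretical limits from the per-frame overheads above:
 * 14-byte header + 4-byte CRC + 8-byte preamble + 12-byte inter-packet gap. */
static void print_limits(unsigned payload_bytes)
{
    const double link_bps = 100e6;                    /* 100 Mbps link speed         */
    const unsigned overhead = 14 + 4 + 8 + 12;        /* non-payload bytes per frame */
    unsigned frame_bytes = payload_bytes + overhead;  /* total bytes on the wire     */
    double frames_per_sec = link_bps / (frame_bytes * 8.0);
    double payload_mbytes = frames_per_sec * payload_bytes / 1e6;

    printf("%4u-byte payload: %8.0f frames/sec, %5.2f Mbytes/sec\n",
           payload_bytes, frames_per_sec, payload_mbytes);
}

int main(void)
{
    print_limits(46);     /* minimum-sized frames: ~148,809 frames/sec, ~6.84 Mbytes/sec */
    print_limits(1500);   /* maximum-sized frames: ~8,127 frames/sec, ~12.19 Mbytes/sec  */
    return 0;
}
```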

Test conditions (see Figure 3)

MCU: LPC1768 running at 100 MHz
Board: Keil MCB1700
PHY chip: National DP83848 (RMII interface)
Tool chain: Keil μVision4 v4.1
Code running from RAM
TxDescriptorNumber = 3
Ethernet mode: Full duplex – 100 Mbps

Test description

In order to get the maximum throughput, 50 frames of 1,514 bytes each (including the Ethernet header) are sent, carrying a total of 75 KB of payload (useful data). The CRC (4 bytes) is automatically appended by the EMAC controller (Ethernet controller).

Test setup


Figure 3: The test setup.


In order to measure the time this process takes, a GPIO pin (P0.0 in our case) is set just before the frames start to be sent and is cleared as soon as the process finishes. In this way, an oscilloscope can measure the elapsed time as the width of the pulse generated on the P0.0 pin. The board is connected to a PC using an Ethernet crossover cable.
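A minimal sketch of this timing measurement is shown below, assuming the CMSIS LPC17xx register definitions. Here send_frame() is a hypothetical stand-in for the test project's transmit routine, and the final wait on TxProduceIndex/TxConsumeIndex is one reasonable way to make sure the EMAC has finished before the pin is cleared.

```c
#include "LPC17xx.h"

#define NUM_FRAMES  50

extern void send_frame(int n);        /* hypothetical: queues one 1,514-byte test frame */

void run_tx_test(void)
{
    LPC_GPIO0->FIODIR |= 1u;          /* configure P0.0 as an output     */
    LPC_GPIO0->FIOSET  = 1u;          /* start of pulse: drive P0.0 high */

    for (int i = 0; i < NUM_FRAMES; i++)
        send_frame(i);                /* hand frame i to the EMAC        */

    /* Wait until the EMAC has consumed every queued transmit descriptor. */
    while (LPC_EMAC->TxProduceIndex != LPC_EMAC->TxConsumeIndex)
        ;

    LPC_GPIO0->FIOCLR = 1u;           /* end of pulse: drive P0.0 low    */
}
```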

The PC runs a sniffer program (Wireshark in this case, http://www.wireshark.org/) to verify that the 50 frames were sent and that the data is correct. A specific pattern in the payload is used so that any errors can be easily recognized. If the 50 frames arrive at the PC with no errors, the test is considered valid (see Figure 4).
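The exact pattern used by the test software is not reproduced here; the sketch below simply illustrates the idea of a payload that makes missing or corrupted frames easy to spot in the capture. The frame index followed by an incrementing byte ramp is an assumption, not the AN11053 pattern.

```c
#include <stdint.h>

/* Fill one frame's payload with an easily recognizable pattern:
 * the frame index first, then an incrementing byte ramp. */
static void fill_pattern(uint8_t *payload, uint16_t len, uint8_t frame_idx)
{
    payload[0] = frame_idx;               /* identifies which of the 50 frames this is */
    for (uint16_t i = 1; i < len; i++)
        payload[i] = (uint8_t)i;          /* 0x01, 0x02, ... wraps every 256 bytes     */
}
```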



Figure 4: Verifying the payload.


Test scenarios

The EMAC uses a series of descriptors which provide pointers to the memory locations where the data buffers, control, and status information reside. In the case of transmission, the application places the frame data into these data buffers. The EMAC uses DMA to fetch the user's data and fill the frame's payload before transmission. Therefore, the method the application uses to copy its data into those buffers affects the overall throughput measurement. For this reason, three different scenarios are presented:

  1. An "ideal" scenario, which doesn't consider the application at all,

  2. A "typical" scenario, where the application copies the application's data into the EMAC's data buffers, using the processor,

  3. An "optimized" scenario, where the application copies the application's data into the EMAC's data buffers, via DMA.

Scenarios description

  1. "Ideal" scenario: In this case, the software sets up the descriptors' data buffers with the test's pattern, and only the TxProduceIndex is incremented 50 times (once for every packet to send) in order to trigger the frame transmission. In other words, the application is not considered at all. Even though this is not a typical user's case, it will provide the maximum possible throughput in transmission.

  2. "Typical" scenario: This case represents the typical case in which the application will copy the data into the descriptors' data buffers before sending the frame. Comparing the results of this case with the previous one, it is apparent that the application is affecting the overall performance. This case should not be considered as the actual EMAC throughput. However, it is presented here to illustrate how non-optimized applications may lower overall results giving the impression that the hardware is too slow.

  3. "Optimized" scenario: This test uses DMA to copy the application's data into the descriptors' data buffers. This case represents a real application, but one that uses optimized methods to take full advantage of the fast LPC1700 hardware, as sketched below.
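The sketch below contrasts the copy step of the "typical" and "optimized" scenarios and shows how advancing TxProduceIndex hands a descriptor to the EMAC. It assumes the CMSIS LPC17xx register names and a descriptor/buffer setup done elsewhere; dma_copy() is a hypothetical placeholder for a GPDMA memory-to-memory transfer, not an API from AN11053.

```c
#include <string.h>
#include "LPC17xx.h"

typedef struct {             /* EMAC transmit descriptor (LPC17xx user manual layout) */
    uint32_t Packet;         /* pointer to the frame data buffer                      */
    uint32_t Control;        /* size - 1 in bits [10:0], plus control flags           */
} TX_DESC;

extern TX_DESC  tx_desc[];               /* descriptor array registered with the EMAC  */
extern uint8_t *tx_buf[];                /* data buffers pointed to by each descriptor */
extern void dma_copy(void *dst, const void *src, uint32_t len);   /* hypothetical GPDMA copy */

void queue_frame(const uint8_t *frame, uint16_t len, int use_dma)
{
    uint32_t idx = LPC_EMAC->TxProduceIndex;

    if (use_dma)
        dma_copy(tx_buf[idx], frame, len);   /* scenario 3: GPDMA performs the copy   */
    else
        memcpy(tx_buf[idx], frame, len);     /* scenario 2: the CPU performs the copy */

    tx_desc[idx].Control = (uint32_t)(len - 1u) | (1u << 30);     /* mark last fragment */

    /* Advancing TxProduceIndex hands the descriptor to the EMAC; scenario 1
     * performs only this step, since its buffers are filled in advance. */
    if (++idx > LPC_EMAC->TxDescriptorNumber)
        idx = 0;
    LPC_EMAC->TxProduceIndex = idx;
}
```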

Software

Test software in the form of a Keil MDK project is provided for this article (please check NXP's website for AN11053). The desired scenario can be selected by opening the "config.h" file with the Configuration Wizard (see Figure 5). Besides the scenario, the number of packets to send and the frame size can also be modified through this file.
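The exact macro names in "config.h" are not reproduced here; the fragment below is only a hypothetical illustration of the kind of settings the Configuration Wizard exposes.

```c
/* Hypothetical example of the settings selectable through config.h;
 * the identifiers are illustrative, not the exact ones used in AN11053. */
#define TEST_SCENARIO   3       /* 1 = ideal, 2 = CPU copy, 3 = DMA copy  */
#define NUM_PACKETS     50      /* number of frames to send               */
#define FRAME_SIZE      1514    /* frame size in bytes, including header  */
```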

Test results

After running the tests, the results are tabulated in Table 1:

              Frames Sent   Payload (bytes)   Total Data (bytes)   Time (msec)   Throughput (Mbytes/sec)   % relative to Max. Possible
Max Possible       -               -                  -                 -                 12.19                     100.0%
Scenario 1        50            1,500             75,000              6.25                12.00                      98.44%
Scenario 2        50            1,500             75,000             10.44                 7.18                      58.93%
Scenario 3        50            1,500             75,000              7.1                 10.56                      86.66%


Table 1: Test results.





Figure 5: Choosing the test scenarios.


Conclusion

Although Scenario 1 is not a practical case, it provides the maximum value possible for our hardware as a reference, which is very close to the theoretical maximum for Ethernet at 100 Mbps. In Scenario 2, the application's effect on the overall performance becomes apparent. Finally, Scenario 3 shows how an optimized application greatly improves the overall throughput.

Other ways to optimize the application and get better results were found by running the code from flash (instead of from RAM), and in some cases by increasing the number of descriptors.

In summary, Ethernet throughput is mainly affected by how the application transfers data from the application buffer to the descriptors' data buffers. Improving this process will enhance overall Ethernet performance. The LPC1700 and other LPC parts have this optimization built into the system hardware with DMA support, enhanced EMAC hardware, and a smart memory bus architecture.

