NVIDIA Announces Earnings of $2.2 Billion for Q2 2018
by Brett Howse on August 10, 2017 6:20 PM EST
Posted in Financial Results
NVIDIA announced its earnings this afternoon for the second quarter of their 2018 fiscal year (not a typo). As we’ve seen over the past several quarters, NVIDIA has been growing their business at a very brisk pace, and that growth was reflected in their earnings statement once again. For the second quarter, ending July 30, NVIDIA reported revenue of $2.23 billion, up 56% from a year ago. Gross margin was up half a percentage point as well, to 58.4%. With revenue and margins both up, it should perhaps not be a shock that operating income also jumped, in this case to $688 million, up 117% compared to Q2 2017. Net income was $583 million, up 123% year-over-year, which resulted in earnings per share of $0.92, up 124% from the $0.41 a year ago. Sometimes these large jumps can be attributed to write-downs or other charges in the comparison quarter, but in fact Q2 2017 was also a record for the company, coming after they took a write-down charge for the Icera modem division two years ago.
| NVIDIA Q2 2018 Financial Results (GAAP) | Q2 2018 | Q1 2018 | Q2 2017 | Q/Q | Y/Y |
| Revenue (in millions USD) | $2230 | $1937 | $1428 | +15.1% | +56.2% |
| Operating Income (in millions USD) | $688 | $554 | $317 | +24.2% | +117.0% |
NVIDIA’s gaming segment continues to be their largest source of revenue, even as they have diversified the company. Despite the contraction of the PC market, PC gaming still appears to be a strong business, and NVIDIA has taken advantage of that. For the quarter, NVIDIA had gaming revenue of $1.186 billion, compared to $781 million a year ago. They’ve not launched anything completely new this quarter, but are still seeing success with their Pascal-based GPUs. Some of this growth can also likely be attributed to cryptocurrency mining, but to NVIDIA, a GeForce sale goes in the gaming column.
Professional visualization is likely still one of the higher-margin divisions of NVIDIA, even as this group has been surpassed in revenue by several other divisions in the company. Professional Visualization revenue grew 13.5%, which is actually pretty solid growth, but it can seem modest compared to some of the other growth in the company.
Datacenter has quickly become one of NVIDIA’s biggest sources of revenue. A year ago, it accounted for just under 11% of the company’s revenue, but for Q2 2018, revenue is up 175% to $416 million. This once small segment of NVIDIA now accounts for almost 19% of their revenue, and with the acceleration of AI and compute tasks in the datacenter, the company appears to be in a prime position to continue to capitalize on that trend.
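Those segment shares fall straight out of the reported figures. As a quick back-of-the-envelope check, here is the arithmetic as a sketch, using only revenue numbers quoted in this article:

```python
# Revenue figures (in millions USD) quoted in the article
total_q2_fy18 = 2230   # Q2 FY2018 total revenue
total_q2_fy17 = 1428   # Q2 FY2017 total revenue
dc_q2_fy18 = 416       # Datacenter revenue, Q2 FY2018

# "Up 175% year-over-year" means the prior-year figure was current / 2.75
dc_q2_fy17 = dc_q2_fy18 / 2.75

share_now = dc_q2_fy18 / total_q2_fy18    # "almost 19%"
share_then = dc_q2_fy17 / total_q2_fy17   # "just under 11%"
print(f"Datacenter share of revenue: {share_then:.1%} -> {share_now:.1%}")
# prints "Datacenter share of revenue: 10.6% -> 18.7%"
```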
Automotive is the segment that emerged out of NVIDIA’s unsuccessful attempt to move into mobile. It continues to grow as well, with NVIDIA signing agreements with many of the largest automotive companies to include its technology in new vehicles. In May of this year, NVIDIA announced that Toyota will utilize the DRIVE PX platform, joining other companies such as Volvo. Revenue for this segment grew 19.3% over Q2 2017 to $142 million this quarter.
Finally, NVIDIA’s OEM and IP segment had a big jump in revenue as well, from $163 million a year ago to $251 million today. That’s a 54% increase.
| NVIDIA Quarterly Revenue Comparison (GAAP) | Q2 2018 | Q1 2018 | Q2 2017 | Q/Q | Y/Y |
| OEM & IP | $251 | $156 | $163 | +60.9% | +54.0% |
Looking ahead to Q3 2018, NVIDIA sees their record year continuing, with expected revenues of $2.35 billion, plus or minus 2%, and margins between 58.1% and 59.1%.
Source: NVIDIA Investor Relations
frenchy_2001 - Thursday, August 10, 2017
1) Casual compute doesn't usually need much FP64, but you do have a point that NV does not serve that market. The Titan used to fill that niche, but after Kepler the low FP64 is a hardware restriction (GM200 was not a compute chip, and similarly GP102 is for visualization; compute is GP100).
2) Recent Titans favor AI, i.e. INT8 operations.
1mpetuous - Thursday, August 10, 2017
Depends on your application. What I would have liked to see is segmentation by limiting the total number of DP compute cards per system via drivers, rather than pricing even a single Tesla out of reach.
mapesdhs - Friday, August 11, 2017
As I said above, NV did supply the FP64 market to a decent extent with the 500 series, the 580 being particularly good. But it wasn't just FP64 where CUDA performance took a dive with the next architecture: check reviews of the 600-series cards, where the 580 hammers them at basically everything, and likewise the 700s up to the 780. I agree with 1mpetuous; NV could have continued with the Titan-style driver mechanism at the very least, but that wouldn't have helped with the general drop which affects the 600s. (Note I'm not talking about games here; the 600s are obviously better than the 500s for that.)
milli - Friday, August 11, 2017
It's not like NV could enable more compute power through drivers. NV took out huge chunks of compute hardware from its GeForce designs to lower power consumption. That has been one of the main reasons why AMD has higher power consumption, but also higher compute performance.
HighTech4US - Thursday, August 10, 2017
Why don't you ask AMD how well that is going for them.
Nvidia (with crippled compute performance on its mainstream cards) consistently generates positive (and growing) profits, while AMD (with non-crippled compute performance on its mainstream cards) hardly makes any profit.
Seems like Nvidia knows how to market their products better than AMD.
TheinsanegamerN - Friday, August 11, 2017
But he wants that extra stuff for FREE! Why should he have to pay more for a niche product? They should just give it away!
dgingeri doesn't understand how market segments work.
dgingeri - Friday, August 11, 2017
Wow, step away and a little comment on their crippling of compute performance becomes some major analysis of my ability to comprehend economics and market segments.
Of course I understand that they felt (incorrectly) the compute performance would hurt their Tesla and Quadro sales, and that they felt it was a good way to differentiate the segments. A lot of people don't seem to understand that the full potential of the card can only be accessed through PhysX, and that this compute crippling hurt non-PhysX physics performance as well as a few other potential uses for CUDA among mainstream users, particularly cryptocurrency mining.
I am annoyed by this, yes. It took away potential from my 980 Ti, particularly in that I can't even consider using my card for cryptocurrency mining in my off time, and any physics that games might be able to use is locked down to Nvidia's proprietary PhysX, which costs money to license. AMD users have those other options.
Yes, I do understand the decisions the execs at Nvidia made were done to make them more money. I consider such things (artificially reducing or eliminating features on lower-priced items) underhanded tactics, as they can't sell the higher-end items on legitimate merits, and elitist, as it artificially relegates those who can't afford the highest-end stuff to inferior products.
Imagine, if you would, a car maker advertising a "V-8 in every car" where 4 cylinders are disabled unless you buy the leather seats, 8-speaker sound system, and moonroof, and you'd have to buy the boat trailer, mobile home trailer, and detachable motorcycle to get more than half the fuel tank capacity. How well would you think of them for that?
That's what I feel Nvidia is doing here.
CiccioB - Friday, August 11, 2017
You are saying a bunch of BS. Nvidia has "crippled" their GPUs in hardware, not software. They decided with Fermi that only the x00 chip would have DP units. With Maxwell, GM200 didn't have them due to die size issues (GM200 was what was meant to be GM104 on the never-developed 20nm HP node).
There is no crippling by driver: you can use whatever Tesla or Quadro you want, and you will get the same cryptocurrency numbers as you do with a GeForce. So there's no artificial crippling.
Moreover, AMD has no particular computing advantage in cryptocurrency calculations. With Ethereum, which is the algorithm used to make comparisons, the important things are memory bandwidth and latency, and, surprise, an RX 580 with the same memory bus as the GTX 1070 computes exactly the same.
The fact that you consider AMD faster is just because they sell 580 cards at the price of the 1060, while actually mounting a GPU that is comparable to a 1070 in the resources it uses.
All this implies that you are just speaking from hate and not from reason.
So you may understand economics and segmentation, but surely not technology. And if you do, you are just lying.
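The bandwidth-bound argument in this thread can be sketched with a little arithmetic. Ethash is deliberately memory-hard: each hash performs roughly 64 DAG accesses of 128 bytes, so memory bandwidth sets a hard ceiling on hashrate regardless of shader throughput. The sketch below assumes both the GTX 1070 and RX 580 pair a 256-bit bus with 8 Gbps GDDR5 (~256 GB/s); treat the numbers as illustrative, not vendor-verified:

```python
# Ethash reads ~64 DAG slices of 128 bytes per hash (memory-hard by design)
BYTES_PER_HASH = 64 * 128  # 8 KiB of DRAM traffic per hash

def ethash_ceiling_mhs(bandwidth_gb_s: float) -> float:
    """Bandwidth-bound upper limit on Ethash hashrate, in MH/s."""
    return bandwidth_gb_s * 1e9 / BYTES_PER_HASH / 1e6

# Both cards have ~256 GB/s of memory bandwidth, which is why their
# real-world hashrates land in the same range despite different GPUs.
for card in ("GTX 1070", "RX 580"):
    print(f"{card}: <= {ethash_ceiling_mhs(256):.2f} MH/s")
# prints "GTX 1070: <= 31.25 MH/s" and "RX 580: <= 31.25 MH/s"
```

Real cards landed somewhat below this ceiling at the time (very roughly 27–30 MH/s for both), which is consistent with the claim that memory bus, not compute "crippling", decides Ethereum performance.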
tipoo - Friday, August 11, 2017
How would we know?
AMD crippled compute on consumer cards far less, but is doing worse. Nvidia tailored GPUs to games at the expense of compute performance, and more consumers bought in....
dgingeri - Friday, August 11, 2017
AMD as a whole is doing far worse. Their GPU division is doing quite well. Their inability to overtake Nvidia despite superior products is a direct result of their horrible driver writing. (Their drivers are buggy, and the interface on the control panel is poorly laid out and performs poorly. A LOT of people, including me, avoid AMD GPUs specifically because of that.)