
Robotaxi Is the Wrong Path

There are two main routes to autonomous driving. One is the Robotaxi route led by Waymo, which only offers users point-A-to-point-B service and aims for L4 from day one. The other is the car-making route led by Tesla, which improves incrementally.

The main reasons I am not optimistic about the Robotaxi route:

  1. The market is too small relative to car sales. Some may argue that once Robotaxis exist, nobody will need to buy a car; yet even in China, where the cost of owning a car far exceeds the cost of hailing one, car sales remain a huge market. So this argument does not hold.
  2. It lacks an incremental path. A Robotaxi is only meaningful after reaching a very high level of automation, whereas even the simplest adaptive cruise control built on ultrasonic sensors already meaningfully improves the driver's experience.
  3. It collects less training data than the car-making route.

By contrast, starting from making cars, gradually offering driver assistance to accumulate data, reaching L4 autonomy, and only then launching a Robotaxi service will deliver an overwhelming, asymmetric blow to the former approach.

Tesla and the "Three EV Stooges" (NIO, Li Auto, and XPeng)

The China-made Model Y recently got a price cut, so this is a good opportunity to write down my thoughts on the EV industry.

First, a disclaimer: this post is written purely from the perspective of a financial investor. It does not discuss the products themselves, and it is entirely subjective.

Opinion 1: The future of cars lies in intelligence, and the core of intelligence is autonomous driving

Autonomous vehicles will crush traditional cars in competition. The fundamental reason autonomous driving is not yet widespread is that it is technically very hard. Beyond autonomy, some car intelligence has already spread, such as Apple CarPlay and Android Auto. But these application-layer innovations are not enough; the core of car intelligence is autonomous driving, which frees up human attention.

Opinion 2: Traditional automakers will struggle to pivot and will inevitably be swept aside by the intelligence wave

In theory, any traditional automaker could build new smart cars. But for reasons of corporate governance, large companies rarely have the resolve to cut off an arm and transform, and they usually lack the talent the transformation requires. Traditional automakers all know that intelligence and electrification are the future, yet under their governance structures the pivot is hard, and they will be slowly boiled like the proverbial frog until the times leave them behind.

Opinion 3: To become China's Tesla, you must beat Tesla in the Chinese market

The share prices of NIO, Li Auto, and XPeng are already high, and they must be pricing in the expectation of becoming China's Tesla. To become China's Tesla, a company must compete with Tesla head-on. So I completely disagree with the claim that "NIO and Tesla make entirely different kinds of cars": whether or not today's models compete, competition with Tesla is unavoidable. Moreover, per Opinion 1, autonomous driving is the core competitiveness of a smart car; a "premium interior" is something anyone can do if they choose, easily imitated and improved upon by competitors, with no moat at all.

So among NIO, Li Auto, and XPeng, which do I favor? My current answer is XPeng. My only remaining uncertainty is the values of XPeng's management, which will take time to observe.

Why am I not optimistic about Li Auto? Building cars is extremely hard, and it demands focus. Li Auto is hedging its bets both ways, which inevitably splits its resources and attention. That may look good on the surface, but solving hard problems requires concentration.

Why am I not optimistic about NIO? The core problem is that it under-invests in intelligence and autonomous driving, and it pins its hopes on differentiated, offset competition while being unprepared for direct competition with Tesla.

XPeng, of course, has problems of its own. The main risks are: 1) management's values — the earlier scandal over allegedly stolen Tesla code made big waves, and I hope it was an isolated case rather than a symptom of crooked values at the top; and 2) an overly aggressive autonomous-driving rollout leading to accidents.

At the same time, I still believe Tesla itself is very competitive, and if Elon Musk wants to win the Chinese market, he certainly can. The main risks are under-investment in China, or insufficient trust in the China leadership team and a failure to delegate. Competing in the Chinese market is hand-to-hand combat; if the China team lacks authority, it will struggle to respond to market shifts and will be beaten locally by domestic brands. Such stories are common in multinational companies.

Tesla's core competitiveness lies in autonomous driving that leads its rivals, and in cost control through production at scale. In a sense, Tesla will be a tougher opponent for the domestic trio than Apple ever was for Huawei and Xiaomi, because it is hard to predict whether openly available autonomous-driving algorithms will continue to exist, and because Tesla simply does not care about maintaining a "premium image". Imagine an Apple willing to fight domestic brands at knife range — how terrifying that would be! So even today, with the share price as high as $700, Tesla remains financially well worth investing in.

From an investment standpoint, I will keep betting on XPeng and Tesla rather than Li Auto and NIO. This does not mean NIO's and Li Auto's share prices will fall, because the smart-car market will keep growing; but XPeng and Tesla are the more likely winners.

I am writing this down today and hope it does not come back to embarrass me.

Chatting About Smallfoot

https://movie.douban.com/subject/26944582/

I watched this animated film on a plane recently, and afterwards I simply had to get my thoughts out. To many people animation is just for kids, but Smallfoot is packed with the director's and writers' ideas, delivered to us through clever plotting, and its positive core message moved me deeply.

The plot is simple: a tribe of yetis living in the Himalayas happens to spot a human; the yetis and the humans probe each other warily and, in the end, begin to understand and accept one another.

Three things moved me most:

  1. It encourages children's curiosity and desire to explore: do not blindly trust authority, but strive to seek the truth
  2. Stay true to your original intention; do not cut corners or take shortcuts and forget why you set out in the first place
  3. If you belong to the majority, accept and try to understand those who are different; if you belong to a minority, be brave enough to come out rather than hide yourself

Delivering such a message through an animated film shows the real skill of the director and writers. It is a film with love, laughs, and depth. Highly recommended.

Download from Baidu Cloud with 100 Threads!

I needed to download some files recently, and after much searching they were only available on Baidu Cloud. I tried the Baidu Cloud client and it crawled along at 70 kbps — at that rate the download would take forever!

After some research I finally found a solution: by opening 100 connections, I pushed the download speed up to 7 Mbps. Here is the summary:

1. Install the direct-link userscript, available here: https://greasyfork.org/en/scripts/39776-%E7%99%BE%E5%BA%A6%E7%BD%91%E7%9B%98%E7%9B%B4%E6%8E%A5%E4%B8%8B%E8%BD%BD%E5%8A%A9%E6%89%8B%E4%BF%AE%E6%94%B9%E7%89%88

2. The Mac really has no great download tools; after much searching, axel is still the simplest. Install axel with Homebrew; if you don't have Homebrew, install it first from https://brew.sh/

Then:
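With Homebrew in place, the install is a one-liner:

```
brew install axel
```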

3. The final step: download with 100 threads!

First get the download address with the direct-link userscript, then:
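The command takes roughly this shape; the URL and output filename below are placeholders for whatever the userscript gives you:

```
# -n 100: open 100 concurrent connections; -o: output file name
# replace the URL with the direct link extracted by the userscript
axel -n 100 -o bigfile.zip "http://example.com/direct-link-from-userscript"
```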

The results are impressive.

It could probably go to 200 threads and saturate the link, but this is good enough.

SAE Is Dead — Spiritually, Completely Dead

About a month ago, SAE changed its pricing rules yet again. The result: every application that uses a database now has to pay a so-called MySQL rent, and the account balance I had figured would last two or three years evaporated overnight.

In hindsight, migrating this blog off SAE two years ago was a very wise decision. Now I can say goodbye to SAE for good.


“Multiple dex files define” Error in Android Development Caused by an IntelliJ IDEA Bug

Recently I encountered a “Multiple dex files define” error while writing and building Android libraries. After much digging, the root cause turned out to be an IntelliJ IDEA bug that appears when you use the Eclipse-style .classpath project configuration file format. I reported the bug to JetBrains; they have confirmed my report, and it is now tracked in their issue system. Until JetBrains fixes it, you can avoid the bug by using the IntelliJ-style .iml project configuration format.

Here is how I encountered and reproduced the bug; the report is also available at https://youtrack.jetbrains.com/issue/IDEA-144038.

When exporting jars, the same artifact configuration produces different output depending on the project file format used (.classpath vs. .iml).

How to reproduce:

Suppose my project structure is:

RootAndroidApplication
  - AndroidLibraryModuleA
  - AndroidLibraryModuleB

AndroidLibraryModuleA depends on AndroidLibraryModuleB, and RootAndroidApplication depends on both AndroidLibraryModuleA and AndroidLibraryModuleB.
Now we want to export AndroidLibraryModuleA and AndroidLibraryModuleB as jars without resources. In other words, we only use the Java part of the code.

In the artifacts settings, we add two jar configurations. For AndroidLibraryModuleA we include only AndroidLibraryModuleA's compile output; for AndroidLibraryModuleB we include only AndroidLibraryModuleB's compile output.

In AndroidLibraryModuleA we change the scope of the dependency on AndroidLibraryModuleB to “provided”. Then we build AndroidLibraryModuleA's artifact.

If we open the exported jar file, we can see that only compiled classes from AndroidLibraryModuleA are listed there.

However, if we keep everything else unchanged and simply switch AndroidLibraryModuleA's project file format from IntelliJ's .iml to Eclipse's .classpath, then rebuild the artifact,

the compiled classes from AndroidLibraryModuleB now appear in AndroidLibraryModuleA's jar file as well.

I believe the .iml behavior is the intended one, while the .classpath behavior results from a bug.

This difference leads to a “Multiple dex files define” error when AndroidLibraryModuleA.jar and AndroidLibraryModuleB.jar are both added to another project as jar dependencies, since duplicate class files then exist in the two jars.

Use HAProxy to Load Balance 300k Concurrent TCP Socket Connections: Port Exhaustion, Keep-alive, and More

I’ve been building a push system recently. To increase the scalability of the system, the best practice is to keep each connection as stateless as possible, so that when a bottleneck appears, the capacity of the whole system can be expanded simply by adding more machines. Speaking of load balancing and reverse proxying, Nginx is probably the most famous and widely acknowledged choice. However, its TCP proxying is quite recent: Nginx only introduced TCP load balancing and reverse proxying in v1.9, released in late May this year, with many features still missing. HAProxy, on the other hand, as the pioneer of TCP load balancing, is mature and stable. I chose HAProxy to build the system and eventually reached 300k concurrent TCP socket connections. I could have achieved a higher number were it not for my rather outdated client PC.

Step 1. Tuning the Linux system

300k concurrent connections is not an easy job even for a high-end server. To begin with, we need to tune the Linux kernel configuration to make the most of our server.

File Descriptors

Since sockets are treated as files by the system, the default file descriptor limit is far too small for our 300k target. Modify /etc/sysctl.conf to add the following lines:
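A sysctl fragment consistent with the 1-million-descriptor target described next; the exact keys and values are my reconstruction, not a quote of the original config:

```
fs.file-max = 1000000
fs.nr_open = 1000000
```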

These lines increase the total number of file descriptors to 1 million.
Next, modify /etc/security/limits.conf to add the following lines:
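A limits.conf fragment matching this step — the first two lines cover ordinary users, and the last two state the limit for root explicitly; the values are my reconstruction:

```
*    soft nofile 1000000
*    hard nofile 1000000
root soft nofile 1000000
root hard nofile 1000000
```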

If you are running as a non-root user, the first two lines should do the job. However, if you are running HAProxy as root, you need to declare the limits for root explicitly.

TCP Buffer

Holding such a huge number of connections costs a lot of memory. To reduce memory use, modify /etc/sysctl.conf to add the following lines:
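Something along these lines shrinks the default per-socket buffers so that hundreds of thousands of mostly idle connections fit in RAM; the exact numbers are illustrative, not the original post's values:

```
# min / default / max, in bytes — small defaults, generous maxima
net.ipv4.tcp_rmem = 4096 4096 16777216
net.ipv4.tcp_wmem = 4096 4096 16777216
```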

Step 2. Tuning HAProxy

Having finished tuning the Linux kernel, we need to tune HAProxy to better fit our requirements.

Increase Max Connections

In HAProxy, there is a “max connection” cap, both globally and per backend. To raise the cap, we need to add a line of configuration under the global scope.
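The line in question is maxconn; the figure below is my choice, sized comfortably above the 300k target:

```
global
    maxconn 400000
```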

Then we add the same line to our backend scope, which makes our backend look like this:
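With the maxconn line added, the backend might look like this (the backend name and server address are placeholders of mine, not the original post's):

```
backend push-backend
    mode tcp
    maxconn 400000
    server app1 10.0.0.2:9000
```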

Tuning Timeout

By default, HAProxy detects dead connections and closes inactive ones. However, the default timeout is too low for a scenario where connections are kept open in a long-polling way. On the client side, my long-lived socket connection to the push server was always closed by HAProxy, because the heartbeat interval in my client implementation is 4 minutes; a more frequent heartbeat would be a heavy burden for both the client (actually an Android device) and the server. To raise the limit, add the following lines to your backend. By default these numbers are in milliseconds.
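For a 4-minute heartbeat, anything above 240000 ms works; 5 minutes is a reasonable pick (my values, not the original's):

```
# milliseconds; must exceed the 4-minute client heartbeat
timeout client 300000
timeout server 300000
```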

Configuring Source IPs to Solve Port Exhaustion

When you are facing 30k simultaneous connections, you will run into “port exhaustion”. It results from the fact that each reverse-proxied connection occupies an ephemeral port on a local IP. The default port range available for outgoing connections spans roughly 30k–60k; in other words, we only have about 30k ports available per IP. This is not enough. We can widen the range by adding the following line to /etc/sysctl.conf:
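The usual way to widen the range is shown below; the bounds are the conventional maximum spread and are my assumption, not a quote of the original config:

```
net.ipv4.ip_local_port_range = 1024 65535
```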

But this does not solve the root problem: we will still run out of ports once the enlarged cap (at most ~64k ports per IP) is reached.
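A quick sanity check on how many source IPs the 300k target implies, assuming the widened 1024–65535 ephemeral range:

```python
import math

ports_per_ip = 65535 - 1024 + 1        # usable ephemeral ports per source IP
target = 300_000                       # concurrent connection goal

ips_needed = math.ceil(target / ports_per_ip)
print(ports_per_ip, ips_needed)        # 64512 ports per IP -> 5 source IPs
```

So around five source IPs cover the target, which is why a handful of virtual interfaces suffices below.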

The ultimate solution to port exhaustion is to increase the number of available IPs. First, we bind a new IP to a new virtual network interface.
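For example — the address below is a placeholder; pick a free address in the application server's subnet:

```
ifconfig eth0:1 192.168.1.11 netmask 255.255.255.0 up
```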

This command binds an intranet address to a virtual network interface eth0:1, whose underlying hardware interface is eth0. It can be executed multiple times to add any number of virtual interfaces. Just remember that the IP must be in the same subnet as your real application server; in other words, there can be no NAT in the link between HAProxy and the application server, or this will not work.

Next, we need to configure HAProxy to use these fresh IPs. There is a source keyword that can be used either in a backend scope or as an argument on a server line. In our experiment the backend-scope form did not seem to work, so we chose the argument form. Here is what the HAProxy config looks like:
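A sketch of the resulting backend, with placeholder names and addresses; note that each server line gets its own name and its own source IP:

```
backend push-backend
    mode tcp
    server app1 10.0.0.2:9000 source 192.168.1.11
    server app2 10.0.0.2:9000 source 192.168.1.12
    server app3 10.0.0.2:9000 source 192.168.1.13
    server app4 10.0.0.2:9000 source 192.168.1.14
```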

Here is the trick: you need to declare the server in multiple entries and give them different app names. If you give all four entries the same app name, HAProxy simply will not work. If you look at HAProxy's status report, you will see that even though these entries share the same backend address, HAProxy treats them as different apps.

That’s all for the configuration! Your HAProxy should now be able to handle over 300k concurrent TCP connections, just as mine does.

IntelliJ / WebStorm slow debugging in Node.js

I recently experienced severely slow debugging in IntelliJ with the Node.js plugin / WebStorm, which made me wait nearly a minute for my app to start. I tried to figure out why, and noticed that most of the time was spent loading various packages.

Later on I found the cause of the slowness: the IDE's break-on-exception option was enabled. In other words, the IDE try-catches almost every line of JavaScript code, whether it was written by you or comes from a third-party package, which leads to a huge performance loss.

Disable it by navigating to the menu ‘Run -> View Breakpoints…’ and toggling off ‘JavaScript Exception Breakpoints’; your program will debug much faster. To accelerate things further, navigate to ‘Help -> Find Action…’, type ‘Registry’ and press Enter, then uncheck ‘js.debugger.v8.use.any.breakpoint’.

Now your Node.js program should run in debug mode almost as fast as when it is not being debugged.

On 《大圣归来》 (Monkey King: Hero Is Back)

I wasn't actually planning to see this film, but my company organized a group outing: a 3D ticket normally priced at 35 yuan cost only 10, a nice perk, and the internet seemed full of praise, so I figured I'd give it a watch. I just finished GTA 5 and have a long review of it half-written, so let me talk about Hero Is Back first.

First, thank goodness the ticket only cost 10 yuan; had I paid 35 I would definitely have felt cheated. Now, item by item.

Visuals: very good. The film's visual breakthrough relative to other domestic animation is obvious. Without exaggeration, these production values would not look out of place among Disney's pile of 3D features: fur detail, lighting, physics effects, all well done. The pity is that the good visuals stop there; they are never used to build atmosphere or convey emotion. The demon's lair has no menace, and the crystal palace where the Monkey King is held is actually beautiful and deserved more close-ups, but the film fiddles with it briefly and moves on. The assets are not put to full use.

Plot: terribly weak. A good plot either tells a story nobody has heard before, or tells a story everyone thinks they know and then takes an unexpected turn. Inception is the former; Frozen is the latter. With Hero Is Back you can guess the ending from the opening — except the director won't even give us the ending, dangling an unfinished story in front of us instead. I'd rather not say more.

Characters: since the title is Hero Is Back, I assume the protagonist is the Monkey King, and everyone else — Jiang Liu'er, the old monk, Zhu Bajie — is supporting cast, right? So what kind of Monkey King does the film actually build? Forgive my poor eyes, I can't tell: dejected because the seal won't break, then transforming like Ultraman upon seeing Jiang Liu'er attacked by the demon — does that establish the Monkey King as a good monkey who protects children? When it comes to protecting children, I'd say the old monk is the true devotee: he dares to enter the demon's lair with no superpowers at all, which puts him far above a superpowered Monkey King.

Music: the moment that Wang Feng song started, I was yanked right out of the film. None of the demon-catching or girl-rescuing has happened yet — what is such a fired-up BGM doing here? Fortunately the rest of the score shows decent restraint; at least it's not negative marks.

Overall, for a film with obvious flaws in some respects, a score between 7 and 8 seems about right to me. Hero Is Back is not good enough to deserve two dedicated hours of your careful, undivided attention; if Disney had released it, I would not hesitate to file it under assembly-line popcorn flicks.

A Boot Failure Caused by an Nginx Misconfiguration: Stopping System V runlevel compatibility

I've been working on the server side of an Android push system recently, which needs the TCP proxying introduced in Nginx 1.9. Since Nginx's default connection limit is too low, I followed my usual habit with kernel parameters and casually cranked the connection count up to 10 million. After reloading the config, my machine died.

It never occurred to me that Nginx was the cause. I instinctively assumed the MQTT library I was using must be leaking memory, so I rebooted the machine without hesitation.

And then it wouldn't boot: the Ubuntu startup screen just spun and spun. I rebooted again, entered Recovery Mode to get logs, and found it stuck at "Stopping System V runlevel compatibility [OK]".

Online opinion almost unanimously blamed the NVIDIA graphics driver. I didn't think that was likely, but since everyone said so, I uninstalled it.

After uninstalling it I still couldn't get into the system, stuck at the same place! So I started researching on my laptop and left the machine alone. A few minutes later I glanced over and saw a new log line: Out of Memory Error, Kill Nginx. Only then did I realize it might be related to my earlier Nginx config change. I quickly confirmed Nginx was the problem: Nginx uses a connection pool and pre-allocates a certain number of spare connections even when there are none. My machine has about 8 GB of RAM and survived roughly 800k connections in earlier tests, so a 10-million connection pool was guaranteed to OOM. Nginx ate all the memory and forced the OS into constant memory reclamation, which hung the boot.

The rest was simple: revert Nginx's connection count, reinstall the graphics driver, boot into the system successfully, then set a sensible connection pool size for Nginx and carry on with the experiments.
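For the record, the knob involved is worker_connections in the events block; the numbers below are illustrative for an ~8 GB machine, not the actual config from this incident:

```
# nginx.conf — worker_connections is a per-worker limit
worker_processes  4;

events {
    # 4 workers x 200000 ≈ the ~800k ceiling observed in earlier tests
    worker_connections  200000;
}
```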