SAE Is Dead, at Least in Spirit

About a month ago SAE changed its pricing rules yet again. As a result, every application that uses a database now has to pay a so-called MySQL rent, and the account balance I had expected to last two or three years evaporated overnight.

In retrospect, migrating my blog off SAE two years ago was a very wise decision. Now I can say goodbye to SAE for good.


Introducing No Wakelock

Hi, No Wakelock is a new Android app I developed. It gives users the ability to disable the wakelocks of specified apps, which are often the root cause of battery drain. It requires the Xposed framework to function.

How does it work?

Android allows apps to use a partial WAKE_LOCK to keep the device awake while the screen is off. However, this mechanism is often abused: some Android developers wrap network-related operations in wakelocks. In fact, network events wake the device up automatically; only pure CPU operations need a wakelock to keep the device from falling asleep.

Disabling the partial WAKE_LOCK is usually safe and has little impact on an app's functionality, unless the app is doing CPU-intensive work such as video rendering or calculating π.

How to use it?

First of all, enable the Xposed module.


It is recommended to use No Wakelock together with apps like Greenify. If you do not want an app running in the background while the screen is off, simply greenify it. If you want an app to keep running in the background but also want to minimize its battery usage, do not greenify it; instead, use No Wakelock to restrict its access to wakelocks.

It is recommended to first identify the apps that hold excessive wakelocks. Tools such as Wakelock Detector can help you with that.


Then open No Wakelock and locate the app you want to restrict.


Then, choose the types of wakelocks you want to disable.

What to disable?

Partial Wakelock: This is the wakelock that prevents your CPU from sleeping while the screen is off.

All Other Wakelocks: These are wakelocks other than the partial one, including those that prevent your screen from turning off.

Sync Adapters: Sync Adapters can also keep devices awake. If you do not need synchronisation, you can disable it.

Align AlarmManager Wake-ups: (>= Android 4.4 only) Use this option to force-align all wakeups caused by AlarmManager so that the CPU can stay asleep for as long as possible. Please be reminded that this option might postpone or break push notifications of poorly designed apps.

These four options should be enough for 99% of the users. However, if you wish to have more precise control over your phone’s wakelocks, you can enable this option:

Apply Custom Black/Whitelist: This is an advanced option. Common users usually do not need to touch it unless you know exactly what you are doing. If you wish to enable it, please edit the custom black/whitelist first. For more information about the black/whitelist, read the next section.

  • Setup Example: Google Play Services

If you are using Google Play Services, it may be consuming too much battery. Suppose you check its battery usage and it turns out that Google Play Services is keeping your device awake even when you are not using it. To save battery, open No Wakelock, navigate to Google Play Services (enable system apps first in No Wakelock's settings), disable the partial wakelock, and leave everything else unchanged. Restart your device to make this take effect.

Google Play Services will no longer consume too much battery. The best part is that GCM notifications and Google Account sync keep working as normal. Woohoo!


Force stop the app (if you only changed the settings of one app) or reboot your device (if you changed the settings of many apps) to make all settings effective.


That’s all. Your device can have a good night’s sleep now.

Advanced Option: Black/WhiteList

Please be reminded that this is only for advanced users.

If you decide to enable it, the priority of wakelock matching becomes: blacklist > whitelist > other settings for the app.

To edit your black/whitelist, click the “Edit” button on the top right and fill in the black/whitelist in corresponding columns.

The black/whitelist works by matching wakelock names. You can write one regular expression per line. For instance, if you see a wakelock named WakeLock:12345 and a wakelock named WakeLock:abcde keeping your device awake, you can put the following in your blacklist:

WakeLock:\d+
WakeLock:[a-zA-Z]+

Please be reminded that each line must contain one and only one regular expression. Do not insert extra blank lines, as this will invalidate all settings for the app.
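To illustrate how such name matching behaves, here is a small sketch (the helper below is hypothetical, not No Wakelock's actual code, and whether the app anchors the whole name is an assumption):

```python
import re

# one regular expression per line of the blacklist
blacklist = [r"WakeLock:\d+", r"WakeLock:[a-zA-Z]+"]

def is_blacklisted(name):
    # a wakelock is blocked when any pattern matches its full name
    return any(re.fullmatch(p, name) for p in blacklist)

print(is_blacklisted("WakeLock:12345"))  # True
print(is_blacklisted("GCM_CONN"))        # False
```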

Get it now!

Download No Wakelock at Google Play: https://play.google.com/store/apps/details?id=com.linangran.nowakelock

Purchase Donation Pack at Google Play: https://play.google.com/store/apps/details?id=com.linangran.nowakelock.donation

About No Wakelock

No Wakelock is an app I developed. It lets users disable the wakelocks of specific apps, which can greatly reduce their power consumption without affecting push notifications. Wakelock abuse is widespread on Android, especially in China, where all kinds of apps use wakelocks to upload and download data in the background, leaking privacy and wasting battery at the same time.

No Wakelock requires the Xposed framework to work.

How it saves power

Android allows apps to use a CPU wakelock to keep the device awake while the screen is off. However, this mechanism is widely abused, particularly by Chinese Android apps: many developers mistakenly wrap network operations in wakelocks, which in practice does nothing but waste battery.

Disabling wakelocks usually has no side effects, and the impact on an app's own functionality is very limited. Configured correctly, No Wakelock can cut a large amount of background battery drain without affecting push notifications from apps such as WeChat.

How to use it

First, enable the Xposed module.

No Wakelock is mainly intended for apps that you want to keep running in the background. For apps that do not need to run in the background, the recommended approach is to simply kill them with Greenify. For apps that you do want running in the background but that drain too much battery, use No Wakelock to restrict their CPU wakelocks so the device can sleep normally.

Before you start, it is recommended to use a tool such as Wakelock Detector to find the apps that hold excessive wakelocks.

Then open No Wakelock and locate the app you want to restrict.

Then choose the types of wakeups you want to disable.

What to disable?

CPU Wakelock: This is the wakelock that prevents your device from sleeping after the screen turns off. Disabling it usually causes no problems.

All Other Wakelocks: Besides the CPU wakelock, there are other wakelocks that can prevent the device from sleeping, or even prevent the screen from turning off. Enable this option to disable them.

Sync: Synchronization can also wake the device. If you do not need an app's sync feature, use this option to disable it.

Align AlarmManager Wake-ups: (>= Android 4.4) AlarmManager can use timers to wake the device periodically, preventing the CPU from entering long sleep. Enable this option to force-align those timers so they fire at roughly the same time and save power. Please note: for poorly designed apps, enabling this option may delay push notifications.

For 99% of users, the four options above are enough to keep an app's wakeups under control. However, if you want finer-grained control over how No Wakelock manages wakelocks, you can enable the following option.

Apply Custom Black/Whitelist: This is an advanced option. If you are sure you want to use the black/whitelist mechanism, please set up the lists first via the edit button in the top-right corner. For more about the black/whitelist, see the next section.

  • Setup example: WeChat

WeChat consumes a lot of power while running in the background. Checking WeChat's battery usage in the system settings shows that it keeps the device awake while the screen is off, preventing it from sleeping. To reduce WeChat's power consumption, open No Wakelock, locate WeChat, disable the CPU wakelock and sync, leave all other options unchanged, and restart the device to apply the changes.

WeChat will no longer drain the battery. Better still, its push notifications remain as timely and accurate as ever; nothing is broken.

Force stop the app (if you only changed the settings of one app) or reboot the device (if you changed the settings of many apps) to make the changes take effect.

That's it! Your device can now get a good night's sleep.

Advanced option: black/whitelist

Please note: this setting is for advanced users only.

When you enable the black/whitelist for an app, the priority of wakelock matching becomes: blacklist > whitelist > your other settings.

To edit the black/whitelist, tap the edit button in the top-right corner, then fill in your entries under the blacklist and whitelist columns respectively.

The black/whitelist filters wakelocks by name, one entry per line, with regular expression support. For example, if you see an app keeping the device awake with a wakelock named WakeLock:12345 and another named WakeLock:abcde, you can enter these regular expressions:

WakeLock:\d+
WakeLock:[a-zA-Z]+

Please note that each line must contain exactly one regular expression. Do not insert extra blank lines, as this will invalidate all settings for the app.

Get it now

Download No Wakelock on Google Play: https://play.google.com/store/apps/details?id=com.linangran.nowakelock

Purchase the Donation Pack on Google Play: https://play.google.com/store/apps/details?id=com.linangran.nowakelock.donation

“Multiple dex files define” Error in Android Development Caused by an IntelliJ IDEA Bug

Recently I encountered a “Multiple dex files define” error while building Android libraries. After a lot of digging, the root cause turned out to be an IntelliJ IDEA bug that appears when you use the .classpath (Eclipse-style) project configuration file format. I reported the bug to JetBrains; they have confirmed it, and it is now being tracked in their issue system. Until JetBrains fixes it, you can avoid the bug by using the .iml (IntelliJ-style) project configuration file format.

Here is how I encountered and reproduced this bug, which is also available through https://youtrack.jetbrains.com/issue/IDEA-144038.

When exporting jars, the same configuration will have different output behaviors using different config file style (.classpath/.iml)

How to reproduce:

Suppose here is my project structure:
RootAndroidApplication
AndroidLibraryModuleA
AndroidLibraryModuleB

AndroidLibraryModuleA depends on AndroidLibraryModuleB, and RootAndroidApplication depends on both AndroidLibraryModuleA and AndroidLibraryModuleB.
Now we want to export AndroidLibraryModuleA and AndroidLibraryModuleB as jars without resources. In other words, we only use the Java code part.

In the artifacts settings, we add two jar configurations. For AndroidLibraryModuleA we include only AndroidLibraryModuleA's compile output; for AndroidLibraryModuleB we include only AndroidLibraryModuleB's compile output.

In AndroidLibraryModuleA we change the scope of the dependency on AndroidLibraryModuleB to “provided”. Then we build the artifacts of AndroidLibraryModuleA.

If we open the exported jar file, we can see that only compiled classes from AndroidLibraryModuleA are listed there.

However, if we keep everything else unchanged and merely switch AndroidLibraryModuleA's project file format from IntelliJ's .iml to Eclipse's .classpath, then rebuild the artifacts,

the compiled classes from AndroidLibraryModuleB also appear in AndroidLibraryModuleA's jar file.

I believe the .iml behavior is the intended one, while the .classpath behavior results from a bug.

This difference leads to a “Multiple dex files define” error when AndroidLibraryModuleA.jar and AndroidLibraryModuleB.jar are both added to another project as jar dependencies, since duplicate class files exist in the two jars.

Use HAProxy to Load Balance 300k Concurrent TCP Socket Connections: Port Exhaustion, Keep-alive and Others

I have been building a push system recently. To increase the scalability of the system, the best practice is to make each connection as stateless as possible, so that when a bottleneck appears, the capacity of the whole system can be expanded simply by adding more machines. Speaking of load balancing and reverse proxying, Nginx is probably the most famous and widely acknowledged option. However, its TCP proxying is a rather recent addition: Nginx introduced TCP load balancing and reverse proxying in v1.9, released in late May this year with a lot of features still missing. HAProxy, on the other hand, as the pioneer of TCP load balancing, is mature and stable. I chose HAProxy to build the system and eventually reached 300k concurrent TCP socket connections; I could have achieved a higher number if not for my rather outdated client PC.

Step 1. Tuning the Linux system

300k concurrent connections is not an easy job even for a high-end server. To begin with, we need to tune the Linux kernel configuration to make the most of our machine.

File Descriptors

Since sockets are treated as files by the system, the default file descriptor limit is rather small for our 300k target. Modify /etc/sysctl.conf to add the following lines:
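A sketch of such lines (the exact values are assumptions, sized for the one-million target):

```conf
# /etc/sysctl.conf -- raise the system-wide file descriptor limits to 1M
fs.file-max = 1000000
fs.nr_open = 1000000
```

Run `sysctl -p` afterwards to apply the changes without rebooting.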

These lines raise the total number of file descriptors to one million.
Next, modify /etc/security/limits.conf to add the following lines:
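A typical set of entries might look like this (the limit values are assumptions; the point is the `*` lines versus the explicit `root` lines):

```conf
# /etc/security/limits.conf -- per-user open-file (nofile) limits
*      soft nofile 1000000
*      hard nofile 1000000
root   soft nofile 1000000
root   hard nofile 1000000
```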

If you are a non-root user, the first two lines should do the job. However, if you are running HAProxy as root, you need to declare the limits for root explicitly.

TCP Buffer

Holding such a huge number of connections costs a lot of memory. To reduce memory use, modify /etc/sysctl.conf to add the following lines.
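Settings along these lines shrink the per-socket TCP buffers (min, default, max); the values below are commonly used for this purpose but are assumptions here, not necessarily the original ones:

```conf
# /etc/sysctl.conf -- smaller default per-socket buffers to fit more connections
net.ipv4.tcp_mem = 786432 1048576 26777216
net.ipv4.tcp_rmem = 4096 4096 16777216
net.ipv4.tcp_wmem = 4096 4096 16777216
```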

Step 2. Tuning HAProxy

Having tuned the Linux kernel, we need to tune HAProxy to better fit our requirements.

Increase Max Connections

In HAProxy, there is a “max connection” cap, both globally and per backend. To raise the cap, we need to add a line of configuration in the global scope.
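That line is `maxconn`; the value below is an assumption sized above the 300k target:

```conf
global
    maxconn 1000000
```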

Then we add the same line to our backend scope, which makes our backend look like this:
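As a sketch: HAProxy accepts `maxconn` in defaults, frontend, and listen sections, so one way to express it is a combined listen section (the section name, port, and server address here are illustrative, not the original configuration):

```conf
listen push_proxy
    bind *:1883
    mode tcp
    maxconn 1000000
    server app1 10.0.0.2:1883
```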

Tuning Timeout

By default, HAProxy detects dead connections and closes inactive ones. However, the default keep-alive threshold is too low for a scenario where connections are held open in a long-polling fashion. On my client side, the long socket connection to the push server was always closed by HAProxy, because the heartbeat interval in my client implementation is 4 minutes, and a more frequent heartbeat is a heavy burden for both the client (actually an Android device) and the server. To raise this limit, add the following lines to your backend. By default these numbers are all in milliseconds.
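A sketch of the relevant timeouts (values are assumptions; they just need to exceed the 4-minute heartbeat, i.e. 240000 ms):

```conf
listen push_proxy
    timeout connect 5000
    timeout client  3600000
    timeout server  3600000
```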

Configuring Source IP to solve port exhaustion

When you reach around 30k simultaneous connections, you will encounter the problem of “port exhaustion”. It results from the fact that each proxied connection occupies an ephemeral port of a local IP. The default port range available for outgoing connections is roughly 30k to 60k; in other words, we only have about 30k ports available per IP. This is not enough. We can widen the range by adding the following line to /etc/sysctl.conf.
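The line in question is the ephemeral port range; the widened range below is a common choice (an assumption, not necessarily the original value):

```conf
# /etc/sysctl.conf -- widen the outgoing (ephemeral) port range
net.ipv4.ip_local_port_range = 1024 65535
```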

But this does not solve the root problem: we will still run out of ports when the 60k cap is reached.

The ultimate solution to this port exhaustion issue is to increase the number of available IPs. First of all, we bind a new IP to a new virtual network interface.
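The command looks like this (the address and netmask are illustrative; use an address in your own subnet):

```shell
# bind an extra intranet IP to virtual interface eth0:1
ifconfig eth0:1 10.0.0.101 netmask 255.255.255.0 up
```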

This command binds an intranet address to a virtual network interface eth0:1, whose underlying hardware interface is eth0. It can be executed several times to add an arbitrary number of virtual interfaces. Just remember that the IPs must be in the same subnet as your real application server; in other words, there cannot be any kind of NAT on the link between HAProxy and the application server. Otherwise, this will not work.

Next, we need to configure HAProxy to use these fresh IPs. There is a source directive that can be used either in a backend scope or as an argument of a server line. In my experiments the backend-scope form did not seem to work, so I chose the argument form. This is how the HAProxy config file looks:
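A sketch of the server entries (backend name, addresses and ports are illustrative; each entry pins a different source IP bound earlier):

```conf
backend push_backend
    mode tcp
    server app1 10.0.0.2:1883 source 10.0.0.101
    server app2 10.0.0.2:1883 source 10.0.0.102
    server app3 10.0.0.2:1883 source 10.0.0.103
    server app4 10.0.0.2:1883 source 10.0.0.104
```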

Here is the trick: you need to declare them as multiple entries and give them different names. If you give all four entries the same name, HAProxy simply will not work. If you look at HAProxy's status report, you will see that even though these entries share the same backend address, HAProxy still treats them as different servers.

That's all for the configuration! Now your HAProxy should be able to handle over 300k concurrent TCP connections, just as mine does.

IntelliJ / WebStorm slow debugging in Node.js

I recently experienced severely slow debugging in IntelliJ with the Node.js plugin / WebStorm, which made me wait nearly a minute for my app to start. Trying to figure out why, I noticed that most of the time was spent loading various packages.

Later on I found the cause of the slowness: the IDE's break-on-exception option was enabled. In other words, the IDE effectively try-catches almost every line of JavaScript code, whether written by you or coming from a third-party package, which leads to a huge performance loss.

Disable it by navigating to the menu ‘Run -> View Breakpoints…’ and toggling off ‘JavaScript Exception Breakpoints’. Your program will debug much faster. To accelerate things further, open ‘Help -> Find Action…’, type ‘Registry’ and press Enter, then uncheck ‘js.debugger.v8.use.any.breakpoint’.

Now your Node.js program should run in debug mode almost as fast as when it is not being debugged.

Some Thoughts on Monkey King: Hero Is Back (《大圣归来》)

I was not actually planning to watch this film, but my company organized a group outing: a 3D ticket normally priced at 35 yuan cost only 10, which counts as a perk, and since the internet seemed full of praise, I figured I would give it a look. I just finished GTA 5 and a long review of it is still unfinished, so let me talk about Monkey King: Hero Is Back first.

First, I should be grateful the ticket only cost 10 yuan; had I paid 35 I would definitely have felt it was not worth it. Now, item by item.

Visuals: very good. The film's visual leap over other domestic animations is obvious. It is no exaggeration to say that this level of visual quality would not look bad even among Disney's pile of 3D animated features. The detail of the fur, the lighting, the physics are all well done. The pity is that the visuals stop there: such good graphics are never used to build atmosphere or carry emotion. The demon cave has no demonic air; the crystal palace where the Monkey King is imprisoned is actually quite beautiful and deserved more close-ups, but the film fiddles with it briefly and moves on. Nothing is used to its full potential.

Plot: terribly weak. A good plot either tells a story nobody has heard, or tells a story everyone thinks they know but takes it somewhere unexpected. The former is Inception; the latter is Frozen. With this film, you can guess the ending from the opening, and then the director does not even let us see the ending, deliberately leaving the story unfinished to keep you hanging. I would rather not say more.

Characters: since the title is “Hero Is Back”, I assume the protagonist must be the Monkey King, and Jiang Liu'er, the old monk, and Pigsy should all be supporting roles. But what kind of Monkey King does the film actually portray? Forgive my poor eyes, I cannot tell: dejected because the seal is not lifted, then transforming like Ultraman upon seeing Jiang Liu'er attacked by the demon, which shows the Monkey King is a good monkey who cares for children? But when it comes to caring for children, I think the old monk is the real deal: he dares to enter the demon's lair with no superpowers at all, which puts him far above a Monkey King who has them.

Music: the moment that Wang Feng song kicked in, I was completely pulled out of the film. At that point he has not caught a single demon or saved the girl; what is such a rousing BGM doing there? Fortunately the rest of the score is reasonably measured, at least not a negative.

Overall, for a film with obvious flaws in some aspects, a score between 7 and 8 seems like a fair assessment to me. Monkey King: Hero Is Back is not good enough to be worth setting aside two hours for careful, attentive appreciation; if Disney had produced it, I would not hesitate to file it under assembly-line popcorn flicks.

A Boot Failure Caused by a Misconfigured Nginx: Stopping System V runlevel compatibility

I have recently been working on the server side of an Android push system, which uses the TCP proxying feature introduced in Nginx 1.9. Since Nginx's default connection limit is too low, I followed my usual habit of tweaking kernel parameters and casually raised the connection count to 10 million. After reloading the configuration, my machine died.

At the time it never occurred to me that Nginx was the cause. I instinctively assumed the MQTT library I was using must be leaking memory, so I simply rebooted the machine.

It never came back up, just spinning forever at the Ubuntu boot screen. I rebooted again into Recovery Mode to read the logs, and found it stuck at “Stopping System V runlevel compatibility [OK]”.

Online opinion almost unanimously blamed the NVIDIA graphics driver. I doubted that was the cause, but since everyone said so, I uninstalled it.

After uninstalling it I still could not get into the system, stuck at exactly the same place! So I started researching on my laptop and left the machine alone. A few minutes later, I glanced at it and saw a new log line: Out of Memory Error, Kill Nginx. Only then did I realize it might be related to the Nginx configuration change I had made. I quickly confirmed Nginx was the culprit: Nginx uses a connection pool and preallocates a number of standby connections even when there are none. My machine has about 8 GB of RAM and could handle roughly 800k connections in earlier tests, so a 10-million connection pool was guaranteed to cause an OOM. Nginx ate all the memory and forced the operating system into constant reclaim, which froze the boot process.

The rest was simple: revert Nginx's connection count, reinstall the graphics driver, boot into the system successfully, then set a sensible connection pool size for Nginx and continue the experiments.
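The offending setting boiled down to something like this (a sketch; the numbers are illustrative, and the point is that Nginx preallocates connection structures per worker):

```nginx
# nginx.conf (sketch) -- the misconfiguration, roughly
events {
    worker_connections  10000000;   # 10M preallocated connections: guaranteed OOM on 8 GB
    # a sane value for this machine, based on the ~800k connections it survived:
    # worker_connections  800000;
}
```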

Closures in different languages

In most scripting languages, there are first-class functions. In short, first-class functions are functions that can serve as call arguments, appear in expressions, and be assigned to variables.

So what is a closure? A closure is a function that carries its surrounding context with it. Among all languages, JavaScript is probably the one where closures are most frequently used. In my opinion, the reason closures are so widespread in JavaScript is that JavaScript lacks a mature OO system compared to other programming languages.
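The original code block was lost in this copy; a minimal reconstruction consistent with the outputs discussed below is:

```javascript
function a() {
    var t = 1;
    // the returned function closes over t and keeps it alive between calls
    return function () {
        t = t + 1;
        console.log(t);
    };
}

var f = a();
f(); // prints 2
f(); // prints 3
```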

The output of this piece of JavaScript code is “2” and “3”. As we can see, function a() returned another function. However, the returned function carries not only its own code but also the context it was created in, i.e. the value of the variable t.

In Apple's programming language Swift, we have similar closures that behave almost identically to JavaScript's.
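A sketch of the Swift version (reconstructed to match the behavior described, not the original listing):

```swift
func a() -> () -> Void {
    var t = 1
    // the closure captures t from the enclosing scope and can mutate it
    return {
        t += 1
        print(t)
    }
}

let f = a()
f() // prints 2
f() // prints 3
```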

The result is the same as JavaScript's: “2” and “3”. What about Python?
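A Python attempt that mirrors the JavaScript version might look like this (passing t through a default named argument, since b() cannot see a()'s local variable otherwise; a reconstruction, not the original listing):

```python
def a():
    t = 1
    # t is not visible inside b(), so we pass it in via a default argument
    def b(t=t):
        t = t + 1        # rebinds the local argument only
        print(t)
    return b

f = a()
f()  # prints 2
f()  # prints 2
```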

There are a few notable differences here. First, the variable t defined in a() is not visible inside b(), so it must be passed in as a named argument. Second, the output is “2” and “2”. This is reasonable: the value of t is passed through an argument, and rebinding the argument does not affect the original variable. However, Python does support “real” closures; the trick is to declare t as a nonlocal variable.
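With nonlocal, the Python version behaves like the JavaScript one (again a sketch consistent with the described outputs):

```python
def a():
    t = 1
    def b():
        nonlocal t       # rebind the enclosing function's t, not a new local
        t = t + 1
        print(t)
    return b

f = a()
f()  # prints 2
f()  # prints 3
```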

Now we are finally there. Next up, Groovy.
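A sketch of the Groovy version, using an anonymous closure (reconstructed, not the original listing):

```groovy
def a() {
    def t = 1
    // a Groovy closure captures the enclosing local variable t by reference
    return {
        t = t + 1
        println(t)
    }
}

def f = a()
f()  // prints 2
f()  // prints 3
```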

Since named functions cannot be defined inside another function in Groovy, we can only use an anonymous closure to do this. The output is “2” and “3”.

In Java, functions are not first-class members, so we never have closures in the same sense. A workaround is to use an anonymous class. Here is an example.
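A sketch of the anonymous-class workaround (the interface and class names are illustrative; the one-element array trick exists because captured locals must be effectively final):

```java
// Java has no first-class functions; an anonymous class plus a one-element
// array emulates a closure over a mutable variable.
interface Counter {
    int tick();
}

class ClosureDemo {
    static Counter a() {
        final int[] t = {1};  // mutate the array's contents, not the variable
        return new Counter() {
            @Override
            public int tick() {
                t[0] = t[0] + 1;
                return t[0];
            }
        };
    }

    public static void main(String[] args) {
        Counter f = a();
        System.out.println(f.tick()); // prints 2
        System.out.println(f.tick()); // prints 3
    }
}
```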

Rule No. 1 for anonymous classes is that you cannot change the value of variables captured from the enclosing stack context. In other words, the variables on the stack are all “final” to the inner class. If one wants to change such a value, it must be declared as a field of a class, which is stored on the heap.

Improve ListView Performance on Android

The performance of ListView on Android can be a disaster when it comes to very complex lists. Things become even more frustrating when combined with other parts of Android such as network images and dynamic loading. The best example of a complex ListView is the feed in the Facebook Android app. They posted an article showing how a ListView can be complex and smooth at the same time.

https://code.facebook.com/posts/879498888759525/fast-rendering-news-feed-on-android/

In short, they split each post in the feed into several parts: the header, the main body and the action panel. Each part can then be reused when rendering the ListView. This is a very clever workaround for ListView's problematic performance.

However, in my case (https://github.com/cfan8/TGFC) things are more complicated, since the content of a post has no fixed style. It may be pure text, text with some decorations, or full of images without a single line of text. Moreover, the content must be interactive, i.e., when you tap a link or an image, the app should respond differently to that tap.

In a previous open source Android app that I contributed to, we tried at least two options.

  1. Use a ListView of WebViews, i.e., each item in the ListView is a WebView. In this case it is easy to interact with other parts of the app while achieving highly dynamic content. Everything worked fine on Android 4.2 and earlier. Performance became a really big issue on Android 4.4, where Google made the WebKit kernel more capable but also heavier: creating a WebView became a time-intensive task we could not afford. Thus we tried several workarounds.
    1. Keep as many WebViews as possible in memory as a cache. When a user scrolls down and back up, the cached WebViews can be reused directly. To keep memory use at an acceptable level, each object can be held through a soft reference.
      This workaround does not work well: scrolling down still creates new WebViews, and it only helps when scrolling back up, which is not very useful.
    2. Reuse each WebView. This did improve the scrolling experience, since we no longer create many WebViews but instead replace the content of each one. The experience was still a little laggy, since rendering HTML also takes time.
      This workaround worked better than the first one, but it brings another big issue: when a WebView is reused, its height does not change when the length of its content changes. In other words, the ListView shows a lot of blank space when content lengths vary significantly.
  2. Use a ScrollView of WebViews and render a whole page of posts at once. This is very brute force, but it surprisingly works! The disadvantages are, first, that it is very memory-consuming, since the whole page of posts lives in main memory; second, the app may freeze for a second or two while rendering the page, depending on how complex the page is. However, once the page is rendered, it scrolls super smoothly no matter what!

When I was figuring out the solution for the new app TGFC, I realized that a WebView might not be the only option for my scenario, since I do not actually need all the features a heavy WebView provides. I want my app to show a few different styles of text and several images, and that is all; I do not need things like z-index or absolute positioning. In my case, TextView meets my demands perfectly.

I started with a ListView of TextViews. At first everything worked fine with text-only content. However, things became more complicated when I introduced network images into my app. Inside the ImageGetter I was using, I first download images asynchronously to a local cache; later I load those images into memory and show them on screen. I saw notable lag when ListView was the outer container loading images from the local cache, so I later switched to a ScrollView and rendered the whole page of posts at once.

The only thing to be careful about is image usage. Large images consume a lot of memory and make the ScrollView really laggy; remember to downsample images when loading them into memory.
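For example, the standard Android downsampling pattern computes a power-of-two inSampleSize for BitmapFactory.Options; the size calculation itself is plain Java (a sketch, not the app's actual code):

```java
class SampleSize {
    // Compute a power-of-two inSampleSize (as used by BitmapFactory.Options)
    // so the decoded bitmap is no larger than roughly reqWidth x reqHeight.
    static int calculateInSampleSize(int width, int height,
                                     int reqWidth, int reqHeight) {
        int inSampleSize = 1;
        while ((height / (inSampleSize * 2)) >= reqHeight
                && (width / (inSampleSize * 2)) >= reqWidth) {
            inSampleSize *= 2;
        }
        return inSampleSize;
    }
}
```

On Android you would first decode with inJustDecodeBounds = true to obtain the raw dimensions, then decode again with the computed inSampleSize.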

Right now I still have some issues with the interaction between my TextView and the other parts of the app. For the time being I have not had time to determine whether they come from the HTML that Jsoup generates or from the way I use TextView, but I am fairly confident these problems are solvable, and I already have some ideas on how to handle them. For now, TextView + ScrollView may be the solution for extremely complex and dynamic lists, with a good user experience and better memory performance than WebView + ScrollView, if you do not want to parse the HTML and analyze the content to distinguish text parts from image parts yourself.

This article is written as a complement to my Zhihu answer. In my opinion, the reason we have so many problems with ListView is its problematic design on Google's part. Here are my suggestions on how to improve ListView performance from the Android design perspective.

  1. Prepare more views before scrolling. Currently the ListView prepares only one extra view that is invisible to the user, which I believe is not enough; the number of views to be pre-rendered should be extended.
  2. Android should introduce a kind of @PausableTask that runs on the UI thread but can be paused to let the UI thread draw what needs to appear on screen. We could show only the basic outlines of items when a view is initialized, then gradually fill in the detailed content during the intervals between UI refreshes, just as Facebook does on its webpage: fill the page with placeholders, then fill those placeholders with content.