Created October 7, 2014 10:37
PredictionIO docker run log
nc: connect to localhost port 7070 (tcp) failed: Connection refused
nc: connect to localhost port 7070 (tcp) failed: Connection refused
nc: connect to localhost port 7070 (tcp) failed: Connection refused
nc: connect to localhost port 7070 (tcp) failed: Connection refused
nc: connect to localhost port 7070 (tcp) failed: Connection refused
nc: connect to localhost port 7070 (tcp) failed: Connection refused
nc: connect to localhost port 7070 (tcp) failed: Connection refused
nc: connect to localhost port 7070 (tcp) failed: Connection refused
nc: connect to localhost port 7070 (tcp) failed: Connection refused
nc: connect to localhost port 7070 (tcp) failed: Connection refused
nc: connect to localhost port 7070 (tcp) failed: Connection refused
nc: connect to localhost port 7070 (tcp) failed: Connection refused
nc: connect to localhost port 7070 (tcp) failed: Connection refused
nc: connect to localhost port 7070 (tcp) failed: Connection refused
nc: connect to localhost port 7070 (tcp) failed: Connection refused
nc: connect to localhost port 7070 (tcp) failed: Connection refused
nc: connect to localhost port 7070 (tcp) failed: Connection refused
Connection to localhost 7070 port [tcp/*] succeeded!
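The repeated `nc` failures above come from a startup script polling the event server on port 7070 until it accepts connections. A minimal Python equivalent of that wait loop (the helper name and timeout values are illustrative, not taken from the gist):

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 60.0, interval: float = 1.0) -> bool:
    """Poll a TCP port until it accepts a connection, mirroring the `nc` retry
    loop: each refused connect is one "Connection refused" line, and the first
    successful connect corresponds to the "succeeded!" line."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True  # port is accepting connections
        except OSError:
            time.sleep(interval)  # server not up yet; retry
    return False
```

The same effect is usually achieved in shell with `until nc -z localhost 7070; do sleep 1; done`.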
Set user 1
Set user 2
Set user 3
Set user 4
Set user 5
Set user 6
Set user 7
Set user 8
Set user 9
Set user 10
Set item 1
Set item 2
Set item 3
Set item 4
Set item 5
Set item 6
Set item 7
Set item 8
Set item 9
Set item 10
Set item 11
Set item 12
Set item 13
Set item 14
Set item 15
Set item 16
Set item 17
Set item 18
Set item 19
Set item 20
Set item 21
Set item 22
Set item 23
Set item 24
Set item 25
Set item 26
Set item 27
Set item 28
Set item 29
Set item 30
Set item 31
Set item 32
Set item 33
Set item 34
Set item 35
Set item 36
Set item 37
Set item 38
Set item 39
Set item 40
Set item 41
Set item 42
Set item 43
Set item 44
Set item 45
Set item 46
Set item 47
Set item 48
Set item 49
Set item 50
User 1 views item 19
User 1 views item 9
User 1 views item 8
User 1 views item 38
User 1 views item 30
User 1 views item 42
User 1 views item 28
User 1 views item 15
User 1 views item 33
User 1 views item 34
User 2 views item 27
User 2 views item 2
User 2 views item 17
User 2 views item 6
User 2 views item 3
User 2 views item 14
User 2 views item 5
User 2 views item 31
User 2 views item 13
User 2 views item 29
User 3 views item 17
User 3 views item 38
User 3 views item 11
User 3 views item 35
User 3 views item 40
User 3 views item 24
User 3 views item 23
User 3 views item 44
User 3 views item 47
User 3 views item 34
User 4 views item 40
User 4 views item 48
User 4 views item 2
User 4 views item 20
User 4 views item 37
User 4 views item 13
User 4 views item 49
User 4 views item 42
User 4 views item 27
User 4 views item 45
User 5 views item 24
User 5 views item 25
User 5 views item 46
User 5 views item 47
User 5 views item 1
User 5 views item 49
User 5 views item 13
User 5 views item 32
User 5 views item 12
User 5 views item 30
User 6 views item 3
User 6 views item 34
User 6 views item 39
User 6 views item 38
User 6 views item 6
User 6 views item 15
User 6 views item 4
User 6 views item 23
User 6 views item 26
User 6 views item 1
User 7 views item 24
User 7 views item 3
User 7 views item 45
User 7 views item 47
User 7 views item 16
User 7 views item 12
User 7 views item 38
User 7 views item 10
User 7 views item 11
User 7 views item 21
User 8 views item 40
User 8 views item 13
User 8 views item 10
User 8 views item 42
User 8 views item 29
User 8 views item 33
User 8 views item 14
User 8 views item 41
User 8 views item 27
User 8 views item 16
User 9 views item 21
User 9 views item 5
User 9 views item 18
User 9 views item 15
User 9 views item 48
User 9 views item 47
User 9 views item 24
User 9 views item 46
User 9 views item 2
User 9 views item 8
User 10 views item 40
User 10 views item 37
User 10 views item 31
User 10 views item 42
User 10 views item 11
User 10 views item 47
User 10 views item 48
User 10 views item 8
User 10 views item 15
User 10 views item 2
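The "Set user / Set item / views" lines above match the itemrank quickstart's sample-data import: 10 users, 50 items, and 10 distinct random views per user, each sent as an event to the server on port 7070. A sketch of that generation logic (the function name and event-tuple shape are illustrative; the actual script sends the events through the PredictionIO SDK):

```python
import random

def generate_sample_events(n_users=10, n_items=50, views_per_user=10, seed=None):
    """Reproduce the shape of the import above: every user and item is
    registered first, then each user views a distinct random subset of items."""
    rng = random.Random(seed)
    events = [("set_user", uid) for uid in range(1, n_users + 1)]
    events += [("set_item", iid) for iid in range(1, n_items + 1)]
    for uid in range(1, n_users + 1):
        # sample() draws without replacement, so a user never views
        # the same item twice, matching the log
        for iid in rng.sample(range(1, n_items + 1), views_per_user):
            events.append(("view", uid, iid))
    return events
```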
2014-10-07 10:23:36,573 INFO tools.Console$ - Engine instance created in subdirectory io.prediction.engines.itemrank.
2014-10-07 10:23:38,539 INFO elasticsearch.plugins - [The Entity] loaded [], sites []
2014-10-07 10:23:39,786 INFO tools.Console$ - Registering a built-in engine.
2014-10-07 10:23:40,264 WARN util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-10-07 10:23:40,551 INFO tools.RegisterEngine$ - Copying file:/PredictionIO/lib/engines_2.10-0.8.0.jar to file:/root/.pio_store/engines/io.prediction.engines.itemrank/0.8.0/engines_2.10-0.8.0.jar
2014-10-07 10:23:40,653 INFO tools.RegisterEngine$ - Copying file:/PredictionIO/lib/engines-assembly-0.8.0-deps.jar to file:/root/.pio_store/engines/io.prediction.engines.itemrank/0.8.0/engines-assembly-0.8.0-deps.jar
2014-10-07 10:23:41,557 INFO tools.RegisterEngine$ - Registering engine io.prediction.engines.itemrank 0.8.0
2014-10-07 10:23:45,730 INFO elasticsearch.plugins - [Stygorr] loaded [], sites []
2014-10-07 10:23:47,345 WARN util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-10-07 10:23:47,748 INFO tools.RunWorkflow$ - Submission command: /spark/bin/spark-submit --class io.prediction.workflow.CreateWorkflow --name PredictionIO Training: io.prediction.engines.itemrank 0.8.0 (Transient Lazy Val) --jars file:/root/.pio_store/engines/io.prediction.engines.itemrank/0.8.0/engines-assembly-0.8.0-deps.jar,file:/root/.pio_store/engines/io.prediction.engines.itemrank/0.8.0/engines_2.10-0.8.0.jar /PredictionIO/lib/pio-assembly-0.8.0.jar --env PIO_STORAGE_SOURCES_HBASE_TYPE=hbase,PIO_ENV_LOADED=1,PIO_STORAGE_SOURCES_HBASE_HOSTS=0,PIO_STORAGE_REPOSITORIES_APPDATA_NAME=predictionio_appdata,PIO_STORAGE_REPOSITORIES_METADATA_NAME=predictionio_metadata,PIO_FS_BASEDIR=/root/.pio_store,PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS=localhost,PIO_HOME=/PredictionIO,PIO_FS_ENGINESDIR=/root/.pio_store/engines,PIO_STORAGE_SOURCES_HBASE_PORTS=0,PIO_STORAGE_SOURCES_ELASTICSEARCH_TYPE=elasticsearch,PIO_STORAGE_REPOSITORIES_METADATA_SOURCE=ELASTICSEARCH,PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=LOCALFS,PIO_STORAGE_REPOSITORIES_EVENTDATA_NAME=predictionio_eventdata,PIO_STORAGE_REPOSITORIES_APPDATA_SOURCE=ELASTICSEARCH,PIO_FS_TMPDIR=/root/.pio_store/tmp,PIO_STORAGE_REPOSITORIES_MODELDATA_NAME=pio_,PIO_STORAGE_SOURCES_LOCALFS_HOSTS=/root/.pio_store/models,PIO_STORAGE_REPOSITORIES_EVENTDATA_SOURCE=HBASE,PIO_STORAGE_SOURCES_LOCALFS_PORTS=0,PIO_STORAGE_SOURCES_ELASTICSEARCH_PORTS=9300,PIO_STORAGE_SOURCES_LOCALFS_TYPE=localfs --engineId io.prediction.engines.itemrank --engineVersion 0.8.0 --engineFactory io.prediction.engines.itemrank.ItemRankEngine --batch Transient Lazy Val --jsonBasePath params --dsp datasource.json --ap algorithms.json --pp preparator.json --sp serving.json
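The `--env` argument in the submission command above packs the PIO_* storage configuration into a single comma-separated KEY=VALUE string, which CreateWorkflow unpacks into the environment map echoed later in the log ("Environment received: Map(...)"). A minimal parser showing the format (a sketch only; the actual Scala implementation may handle edge cases differently):

```python
def parse_pio_env(arg: str) -> dict:
    """Split a comma-separated KEY=VALUE string, as passed via --env, into a
    dict. split("=", 1) keeps any later '=' characters inside the value."""
    return dict(pair.split("=", 1) for pair in arg.split(",") if pair)
```

Note this format only works because none of the PIO_* values contain commas.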
Spark assembly has been built with Hive, including Datanucleus jars on classpath
2014-10-07 10:23:49,876 WARN util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-10-07 10:23:52,674 WARN workflow.WorkflowUtils$ - Non-empty parameters supplied to io.prediction.engines.itemrank.ItemRankServing, but its constructor does not accept any arguments. Stubbing with empty parameters.
2014-10-07 10:23:53,117 INFO elasticsearch.plugins - [Monet St. Croix] loaded [], sites []
2014-10-07 10:23:54,501 INFO workflow.CoreWorkflow$ - CoreWorkflow.run
2014-10-07 10:23:54,502 INFO workflow.CoreWorkflow$ - Start spark context
2014-10-07 10:23:54,517 INFO workflow.WorkflowContext$ - Environment received: Map(PIO_STORAGE_SOURCES_HBASE_TYPE -> hbase, PIO_ENV_LOADED -> 1, PIO_STORAGE_SOURCES_HBASE_HOSTS -> 0, PIO_STORAGE_REPOSITORIES_APPDATA_NAME -> predictionio_appdata, PIO_STORAGE_REPOSITORIES_METADATA_NAME -> predictionio_metadata, PIO_FS_BASEDIR -> /root/.pio_store, PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS -> localhost, PIO_HOME -> /PredictionIO, PIO_FS_ENGINESDIR -> /root/.pio_store/engines, PIO_STORAGE_SOURCES_HBASE_PORTS -> 0, PIO_STORAGE_SOURCES_ELASTICSEARCH_TYPE -> elasticsearch, PIO_STORAGE_REPOSITORIES_METADATA_SOURCE -> ELASTICSEARCH, PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE -> LOCALFS, PIO_STORAGE_REPOSITORIES_EVENTDATA_NAME -> predictionio_eventdata, PIO_STORAGE_REPOSITORIES_APPDATA_SOURCE -> ELASTICSEARCH, PIO_FS_TMPDIR -> /root/.pio_store/tmp, PIO_STORAGE_REPOSITORIES_MODELDATA_NAME -> pio_, PIO_STORAGE_SOURCES_LOCALFS_HOSTS -> /root/.pio_store/models, PIO_STORAGE_REPOSITORIES_EVENTDATA_SOURCE -> HBASE, PIO_STORAGE_SOURCES_LOCALFS_PORTS -> 0, PIO_STORAGE_SOURCES_ELASTICSEARCH_PORTS -> 9300, PIO_STORAGE_SOURCES_LOCALFS_TYPE -> localfs)
2014-10-07 10:23:54,520 INFO workflow.WorkflowContext$ - SparkConf executor environment: ArraySeq((PIO_STORAGE_REPOSITORIES_EVENTDATA_SOURCE,HBASE), (PIO_STORAGE_REPOSITORIES_METADATA_NAME,predictionio_metadata), (PIO_HOME,/PredictionIO), (PIO_STORAGE_SOURCES_LOCALFS_HOSTS,/root/.pio_store/models), (PIO_STORAGE_SOURCES_ELASTICSEARCH_PORTS,9300), (PIO_FS_ENGINESDIR,/root/.pio_store/engines), (PIO_STORAGE_SOURCES_HBASE_HOSTS,0), (PIO_ENV_LOADED,1), (PIO_STORAGE_REPOSITORIES_METADATA_SOURCE,ELASTICSEARCH), (PIO_STORAGE_SOURCES_ELASTICSEARCH_TYPE,elasticsearch), (PIO_FS_TMPDIR,/root/.pio_store/tmp), (PIO_STORAGE_REPOSITORIES_APPDATA_SOURCE,ELASTICSEARCH), (PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE,LOCALFS), (PIO_STORAGE_REPOSITORIES_APPDATA_NAME,predictionio_appdata), (PIO_STORAGE_SOURCES_LOCALFS_PORTS,0), (PIO_STORAGE_REPOSITORIES_EVENTDATA_NAME,predictionio_eventdata), (PIO_STORAGE_SOURCES_LOCALFS_TYPE,localfs), (PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS,localhost), (PIO_STORAGE_SOURCES_HBASE_PORTS,0), (PIO_STORAGE_SOURCES_HBASE_TYPE,hbase), (PIO_STORAGE_REPOSITORIES_MODELDATA_NAME,pio_), (PIO_FS_BASEDIR,/root/.pio_store))
2014-10-07 10:23:54,576 INFO spark.SecurityManager - Changing view acls to: root,
2014-10-07 10:23:54,576 INFO spark.SecurityManager - Changing modify acls to: root,
2014-10-07 10:23:54,577 INFO spark.SecurityManager - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root, ); users with modify permissions: Set(root, )
2014-10-07 10:23:55,495 INFO slf4j.Slf4jLogger - Slf4jLogger started
2014-10-07 10:23:55,554 INFO Remoting - Starting remoting
2014-10-07 10:23:55,741 INFO Remoting - Remoting started; listening on addresses :[akka.tcp://sparkDriver@083b1e7e4ebf:40005]
2014-10-07 10:23:55,746 INFO Remoting - Remoting now listens on addresses: [akka.tcp://sparkDriver@083b1e7e4ebf:40005]
2014-10-07 10:23:55,758 INFO util.Utils - Successfully started service 'sparkDriver' on port 40005.
2014-10-07 10:23:55,909 INFO spark.SparkEnv - Registering MapOutputTracker
2014-10-07 10:23:55,921 INFO spark.SparkEnv - Registering BlockManagerMaster
2014-10-07 10:23:55,953 INFO storage.DiskBlockManager - Created local directory at /tmp/spark-local-20141007102355-ae79
2014-10-07 10:23:55,978 INFO util.Utils - Successfully started service 'Connection manager for block manager' on port 39812.
2014-10-07 10:23:55,979 INFO network.ConnectionManager - Bound socket to port 39812 with id = ConnectionManagerId(083b1e7e4ebf,39812)
2014-10-07 10:23:55,988 INFO storage.MemoryStore - MemoryStore started with capacity 267.3 MB
2014-10-07 10:23:55,995 INFO storage.BlockManagerMaster - Trying to register BlockManager
2014-10-07 10:23:55,997 INFO storage.BlockManagerMasterActor - Registering block manager 083b1e7e4ebf:39812 with 267.3 MB RAM
2014-10-07 10:23:55,999 INFO storage.BlockManagerMaster - Registered BlockManager
2014-10-07 10:23:56,014 INFO spark.HttpFileServer - HTTP File server directory is /tmp/spark-277b6a22-cfb9-482a-88c0-2d088c370ab2
2014-10-07 10:23:56,018 INFO spark.HttpServer - Starting HTTP Server
2014-10-07 10:23:56,097 INFO server.Server - jetty-8.y.z-SNAPSHOT
2014-10-07 10:23:56,116 INFO server.AbstractConnector - Started SocketConnector@0.0.0.0:46162
2014-10-07 10:23:56,116 INFO util.Utils - Successfully started service 'HTTP file server' on port 46162.
2014-10-07 10:23:56,912 INFO server.Server - jetty-8.y.z-SNAPSHOT
2014-10-07 10:23:56,951 INFO server.AbstractConnector - Started SelectChannelConnector@0.0.0.0:4040
2014-10-07 10:23:56,951 INFO util.Utils - Successfully started service 'SparkUI' on port 4040.
2014-10-07 10:23:56,956 INFO ui.SparkUI - Started SparkUI at http://083b1e7e4ebf:4040
2014-10-07 10:23:58,148 INFO spark.SparkContext - Added JAR file:/root/.pio_store/engines/io.prediction.engines.itemrank/0.8.0/engines-assembly-0.8.0-deps.jar at http://172.17.0.3:46162/jars/engines-assembly-0.8.0-deps.jar with timestamp 1412677438148
2014-10-07 10:23:58,154 INFO spark.SparkContext - Added JAR file:/root/.pio_store/engines/io.prediction.engines.itemrank/0.8.0/engines_2.10-0.8.0.jar at http://172.17.0.3:46162/jars/engines_2.10-0.8.0.jar with timestamp 1412677438154
2014-10-07 10:23:58,565 INFO spark.SparkContext - Added JAR file:/PredictionIO/lib/pio-assembly-0.8.0.jar at http://172.17.0.3:46162/jars/pio-assembly-0.8.0.jar with timestamp 1412677438565
2014-10-07 10:23:58,633 INFO util.AkkaUtils - Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@083b1e7e4ebf:40005/user/HeartbeatReceiver
2014-10-07 10:23:58,662 INFO workflow.CoreWorkflow$ - Data Source
2014-10-07 10:23:58,876 INFO spark.SparkContext - Starting job: collect at DataSource.scala:49
2014-10-07 10:23:58,895 INFO scheduler.DAGScheduler - Got job 0 (collect at DataSource.scala:49) with 1 output partitions (allowLocal=false)
2014-10-07 10:23:58,895 INFO scheduler.DAGScheduler - Final stage: Stage 0(collect at DataSource.scala:49)
2014-10-07 10:23:58,896 INFO scheduler.DAGScheduler - Parents of final stage: List()
2014-10-07 10:23:58,901 INFO scheduler.DAGScheduler - Missing parents: List()
2014-10-07 10:23:58,906 INFO scheduler.DAGScheduler - Submitting Stage 0 (MappedRDD[3] at map at DataSource.scala:49), which has no missing parents
2014-10-07 10:23:59,073 INFO storage.MemoryStore - ensureFreeSpace(3424) called with curMem=0, maxMem=280248975
2014-10-07 10:23:59,076 INFO storage.MemoryStore - Block broadcast_0 stored as values in memory (estimated size 3.3 KB, free 267.3 MB)
2014-10-07 10:23:59,097 INFO scheduler.DAGScheduler - Submitting 1 missing tasks from Stage 0 (MappedRDD[3] at map at DataSource.scala:49)
2014-10-07 10:23:59,098 INFO scheduler.TaskSchedulerImpl - Adding task set 0.0 with 1 tasks
2014-10-07 10:23:59,126 INFO scheduler.TaskSetManager - Starting task 0.0 in stage 0.0 (TID 0, localhost, PROCESS_LOCAL, 1456 bytes)
2014-10-07 10:23:59,139 INFO executor.Executor - Running task 0.0 in stage 0.0 (TID 0)
2014-10-07 10:23:59,153 INFO executor.Executor - Fetching http://172.17.0.3:46162/jars/engines-assembly-0.8.0-deps.jar with timestamp 1412677438148
2014-10-07 10:23:59,155 INFO util.Utils - Fetching http://172.17.0.3:46162/jars/engines-assembly-0.8.0-deps.jar to /tmp/fetchFileTemp6853469657560464093.tmp
2014-10-07 10:24:00,409 INFO executor.Executor - Adding file:/tmp/spark-ea8f1677-a255-41ff-a5a0-1d775ade5207/engines-assembly-0.8.0-deps.jar to class loader
2014-10-07 10:24:00,409 INFO executor.Executor - Fetching http://172.17.0.3:46162/jars/pio-assembly-0.8.0.jar with timestamp 1412677438565
2014-10-07 10:24:00,410 INFO util.Utils - Fetching http://172.17.0.3:46162/jars/pio-assembly-0.8.0.jar to /tmp/fetchFileTemp8790159946237430241.tmp
2014-10-07 10:24:01,042 INFO executor.Executor - Adding file:/tmp/spark-ea8f1677-a255-41ff-a5a0-1d775ade5207/pio-assembly-0.8.0.jar to class loader
2014-10-07 10:24:01,043 INFO executor.Executor - Fetching http://172.17.0.3:46162/jars/engines_2.10-0.8.0.jar with timestamp 1412677438154
2014-10-07 10:24:01,047 INFO util.Utils - Fetching http://172.17.0.3:46162/jars/engines_2.10-0.8.0.jar to /tmp/fetchFileTemp796908445782571784.tmp
2014-10-07 10:24:01,069 INFO executor.Executor - Adding file:/tmp/spark-ea8f1677-a255-41ff-a5a0-1d775ade5207/engines_2.10-0.8.0.jar to class loader
2014-10-07 10:24:01,089 INFO spark.CacheManager - Partition rdd_2_0 not found, computing it
2014-10-07 10:24:01,690 INFO zookeeper.ZooKeeper - Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
2014-10-07 10:24:01,691 INFO zookeeper.ZooKeeper - Client environment:host.name=083b1e7e4ebf
2014-10-07 10:24:01,691 INFO zookeeper.ZooKeeper - Client environment:java.version=1.7.0_67
2014-10-07 10:24:01,691 INFO zookeeper.ZooKeeper - Client environment:java.vendor=Oracle Corporation
2014-10-07 10:24:01,691 INFO zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/java-7-oracle/jre
2014-10-07 10:24:01,691 INFO zookeeper.ZooKeeper - Client environment:java.class.path=::/spark/conf:/spark/lib/spark-assembly-1.1.0-hadoop2.4.0.jar:/spark/lib/datanucleus-core-3.2.2.jar:/spark/lib/datanucleus-rdbms-3.2.1.jar:/spark/lib/datanucleus-api-jdo-3.2.1.jar
2014-10-07 10:24:01,691 INFO zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2014-10-07 10:24:01,691 INFO zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
2014-10-07 10:24:01,691 INFO zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
2014-10-07 10:24:01,691 INFO zookeeper.ZooKeeper - Client environment:os.name=Linux
2014-10-07 10:24:01,691 INFO zookeeper.ZooKeeper - Client environment:os.arch=amd64
2014-10-07 10:24:01,691 INFO zookeeper.ZooKeeper - Client environment:os.version=3.13.0-32-generic
2014-10-07 10:24:01,691 INFO zookeeper.ZooKeeper - Client environment:user.name=root
2014-10-07 10:24:01,691 INFO zookeeper.ZooKeeper - Client environment:user.home=/root
2014-10-07 10:24:01,691 INFO zookeeper.ZooKeeper - Client environment:user.dir=/quickstartapp/io.prediction.engines.itemrank
2014-10-07 10:24:01,692 INFO zookeeper.ZooKeeper - Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x15fd3cb4, quorum=localhost:2181, baseZNode=/hbase
2014-10-07 10:24:01,742 INFO zookeeper.RecoverableZooKeeper - Process identifier=hconnection-0x15fd3cb4 connecting to ZooKeeper ensemble=localhost:2181
2014-10-07 10:24:01,752 INFO zookeeper.ClientCnxn - Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
2014-10-07 10:24:01,754 INFO zookeeper.ClientCnxn - Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session
2014-10-07 10:24:01,766 INFO zookeeper.ClientCnxn - Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x148ea22357f000d, negotiated timeout = 40000
2014-10-07 10:24:02,660 INFO zookeeper.ZooKeeper - Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0xac291e1, quorum=localhost:2181, baseZNode=/hbase
2014-10-07 10:24:02,667 INFO zookeeper.RecoverableZooKeeper - Process identifier=hconnection-0xac291e1 connecting to ZooKeeper ensemble=localhost:2181
2014-10-07 10:24:02,670 INFO zookeeper.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2014-10-07 10:24:02,671 INFO zookeeper.ClientCnxn - Socket connection established to localhost/127.0.0.1:2181, initiating session
2014-10-07 10:24:02,673 INFO zookeeper.ClientCnxn - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x148ea22357f000e, negotiated timeout = 40000
2014-10-07 10:24:02,681 INFO zookeeper.ZooKeeper - Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0xac291e1, quorum=localhost:2181, baseZNode=/hbase
2014-10-07 10:24:02,691 INFO zookeeper.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2014-10-07 10:24:02,692 INFO zookeeper.ClientCnxn - Socket connection established to localhost/127.0.0.1:2181, initiating session
2014-10-07 10:24:02,695 INFO zookeeper.ClientCnxn - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x148ea22357f000f, negotiated timeout = 40000
2014-10-07 10:24:02,695 INFO zookeeper.RecoverableZooKeeper - Process identifier=catalogtracker-on-hconnection-0xac291e1 connecting to ZooKeeper ensemble=localhost:2181
2014-10-07 10:24:02,790 INFO Configuration.deprecation - hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2014-10-07 10:24:03,263 INFO client.HConnectionManager$HConnectionImplementation - Closing zookeeper sessionid=0x148ea22357f000e
2014-10-07 10:24:03,266 INFO zookeeper.ClientCnxn - EventThread shut down
2014-10-07 10:24:03,267 INFO zookeeper.ZooKeeper - Session: 0x148ea22357f000e closed
2014-10-07 10:24:03,271 INFO zookeeper.ClientCnxn - EventThread shut down
2014-10-07 10:24:03,271 INFO zookeeper.ZooKeeper - Session: 0x148ea22357f000f closed
2014-10-07 10:24:03,896 INFO storage.MemoryStore - ensureFreeSpace(25400) called with curMem=3424, maxMem=280248975
2014-10-07 10:24:03,897 INFO storage.MemoryStore - Block rdd_2_0 stored as values in memory (estimated size 24.8 KB, free 267.2 MB)
2014-10-07 10:24:03,898 INFO storage.BlockManagerInfo - Added rdd_2_0 in memory on 083b1e7e4ebf:39812 (size: 24.8 KB, free: 267.2 MB)
2014-10-07 10:24:03,899 INFO storage.BlockManagerMaster - Updated info of block rdd_2_0
2014-10-07 10:24:03,912 INFO executor.Executor - Finished task 0.0 in stage 0.0 (TID 0). 1344 bytes result sent to driver
2014-10-07 10:24:03,920 INFO scheduler.DAGScheduler - Stage 0 (collect at DataSource.scala:49) finished in 4.810 s
2014-10-07 10:24:03,928 INFO scheduler.TaskSetManager - Finished task 0.0 in stage 0.0 (TID 0) in 4798 ms on localhost (1/1)
2014-10-07 10:24:03,931 INFO scheduler.TaskSchedulerImpl - Removed TaskSet 0.0, whose tasks have all completed, from pool
2014-10-07 10:24:03,934 INFO spark.SparkContext - Job finished: collect at DataSource.scala:49, took 5.056824094 s
2014-10-07 10:24:04,003 INFO workflow.CoreWorkflow$ - Number of training set: 1
2014-10-07 10:24:04,031 INFO spark.SparkContext - Starting job: collect at WorkflowUtils.scala:179
2014-10-07 10:24:04,034 INFO scheduler.DAGScheduler - Got job 1 (collect at WorkflowUtils.scala:179) with 1 output partitions (allowLocal=false)
2014-10-07 10:24:04,034 INFO scheduler.DAGScheduler - Final stage: Stage 1(collect at WorkflowUtils.scala:179)
2014-10-07 10:24:04,034 INFO scheduler.DAGScheduler - Parents of final stage: List()
2014-10-07 10:24:04,037 INFO scheduler.DAGScheduler - Missing parents: List()
2014-10-07 10:24:04,044 INFO scheduler.DAGScheduler - Submitting Stage 1 (MappedRDD[6] at map at DataSource.scala:53), which has no missing parents
2014-10-07 10:24:04,053 INFO storage.MemoryStore - ensureFreeSpace(3912) called with curMem=28824, maxMem=280248975
2014-10-07 10:24:04,056 INFO storage.MemoryStore - Block broadcast_1 stored as values in memory (estimated size 3.8 KB, free 267.2 MB)
2014-10-07 10:24:04,067 INFO scheduler.DAGScheduler - Submitting 1 missing tasks from Stage 1 (MappedRDD[6] at map at DataSource.scala:53)
2014-10-07 10:24:04,067 INFO scheduler.TaskSchedulerImpl - Adding task set 1.0 with 1 tasks
2014-10-07 10:24:04,072 INFO scheduler.TaskSetManager - Starting task 0.0 in stage 1.0 (TID 1, localhost, ANY, 1456 bytes)
2014-10-07 10:24:04,073 INFO executor.Executor - Running task 0.0 in stage 1.0 (TID 1)
2014-10-07 10:24:04,085 INFO storage.BlockManager - Found block rdd_2_0 locally
2014-10-07 10:24:04,113 INFO executor.Executor - Finished task 0.0 in stage 1.0 (TID 1). 8563 bytes result sent to driver
2014-10-07 10:24:04,138 INFO scheduler.DAGScheduler - Stage 1 (collect at WorkflowUtils.scala:179) finished in 0.060 s
2014-10-07 10:24:04,139 INFO spark.SparkContext - Job finished: collect at WorkflowUtils.scala:179, took 0.106042199 s
2014-10-07 10:24:04,144 INFO scheduler.TaskSetManager - Finished task 0.0 in stage 1.0 (TID 1) in 47 ms on localhost (1/1)
2014-10-07 10:24:04,144 INFO scheduler.TaskSchedulerImpl - Removed TaskSet 1.0, whose tasks have all completed, from pool
2014-10-07 10:24:04,145 INFO spark.SparkContext - Starting job: collect at Workflow.scala:388
2014-10-07 10:24:04,146 INFO scheduler.DAGScheduler - Got job 2 (collect at Workflow.scala:388) with 1 output partitions (allowLocal=false)
2014-10-07 10:24:04,146 INFO scheduler.DAGScheduler - Final stage: Stage 2(collect at Workflow.scala:388)
2014-10-07 10:24:04,146 INFO scheduler.DAGScheduler - Parents of final stage: List()
2014-10-07 10:24:04,149 INFO scheduler.DAGScheduler - Missing parents: List()
2014-10-07 10:24:04,149 INFO scheduler.DAGScheduler - Submitting Stage 2 (FlatMappedRDD[8] at flatMap at DataSource.scala:54), which has no missing parents
2014-10-07 10:24:04,153 INFO storage.MemoryStore - ensureFreeSpace(4088) called with curMem=32736, maxMem=280248975
2014-10-07 10:24:04,153 INFO storage.MemoryStore - Block broadcast_2 stored as values in memory (estimated size 4.0 KB, free 267.2 MB)
2014-10-07 10:24:04,155 INFO scheduler.DAGScheduler - Submitting 1 missing tasks from Stage 2 (FlatMappedRDD[8] at flatMap at DataSource.scala:54)
2014-10-07 10:24:04,155 INFO scheduler.TaskSchedulerImpl - Adding task set 2.0 with 1 tasks
2014-10-07 10:24:04,156 INFO scheduler.TaskSetManager - Starting task 0.0 in stage 2.0 (TID 2, localhost, ANY, 1456 bytes)
2014-10-07 10:24:04,156 INFO executor.Executor - Running task 0.0 in stage 2.0 (TID 2)
2014-10-07 10:24:04,164 INFO storage.BlockManager - Found block rdd_2_0 locally
2014-10-07 10:24:04,165 INFO executor.Executor - Finished task 0.0 in stage 2.0 (TID 2). 1689 bytes result sent to driver
2014-10-07 10:24:04,168 INFO scheduler.DAGScheduler - Stage 2 (collect at Workflow.scala:388) finished in 0.006 s
2014-10-07 10:24:04,168 INFO spark.SparkContext - Job finished: collect at Workflow.scala:388, took 0.023404118 s
2014-10-07 10:24:04,170 INFO workflow.CoreWorkflow$ - Data Set 0
2014-10-07 10:24:04,170 INFO workflow.CoreWorkflow$ - Params: null
2014-10-07 10:24:04,171 INFO workflow.CoreWorkflow$ - TrainingData:
2014-10-07 10:24:04,172 INFO workflow.CoreWorkflow$ - [TrainingData:Map(5 -> 10, 10 -> 3)... Map(5 -> 19, 10 -> 11)... List(7 5 view, 7 11 view)...]
2014-10-07 10:24:04,172 INFO scheduler.TaskSetManager - Finished task 0.0 in stage 2.0 (TID 2) in 12 ms on localhost (1/1)
2014-10-07 10:24:04,173 INFO scheduler.TaskSchedulerImpl - Removed TaskSet 2.0, whose tasks have all completed, from pool
2014-10-07 10:24:04,173 INFO workflow.CoreWorkflow$ - TestingData: (count=0)
2014-10-07 10:24:04,175 INFO workflow.CoreWorkflow$ - Data source complete
2014-10-07 10:24:04,175 INFO workflow.CoreWorkflow$ - Preparator
2014-10-07 10:24:04,200 INFO spark.SparkContext - Starting job: collect at WorkflowUtils.scala:179
2014-10-07 10:24:04,201 INFO scheduler.DAGScheduler - Got job 3 (collect at WorkflowUtils.scala:179) with 1 output partitions (allowLocal=false)
2014-10-07 10:24:04,201 INFO scheduler.DAGScheduler - Final stage: Stage 3(collect at WorkflowUtils.scala:179)
2014-10-07 10:24:04,201 INFO scheduler.DAGScheduler - Parents of final stage: List()
2014-10-07 10:24:04,204 INFO scheduler.DAGScheduler - Missing parents: List()
2014-10-07 10:24:04,205 INFO scheduler.DAGScheduler - Submitting Stage 3 (MappedRDD[9] at map at Preparator.scala:42), which has no missing parents
2014-10-07 10:24:04,208 INFO storage.MemoryStore - ensureFreeSpace(5216) called with curMem=36824, maxMem=280248975
2014-10-07 10:24:04,208 INFO storage.MemoryStore - Block broadcast_3 stored as values in memory (estimated size 5.1 KB, free 267.2 MB)
2014-10-07 10:24:04,210 INFO scheduler.DAGScheduler - Submitting 1 missing tasks from Stage 3 (MappedRDD[9] at map at Preparator.scala:42)
2014-10-07 10:24:04,210 INFO scheduler.TaskSchedulerImpl - Adding task set 3.0 with 1 tasks
2014-10-07 10:24:04,212 INFO scheduler.TaskSetManager - Starting task 0.0 in stage 3.0 (TID 3, localhost, ANY, 1456 bytes)
2014-10-07 10:24:04,212 INFO executor.Executor - Running task 0.0 in stage 3.0 (TID 3)
2014-10-07 10:24:04,220 INFO storage.BlockManager - Found block rdd_2_0 locally
2014-10-07 10:24:04,240 INFO executor.Executor - Finished task 0.0 in stage 3.0 (TID 3). 8303 bytes result sent to driver
2014-10-07 10:24:04,255 INFO scheduler.DAGScheduler - Stage 3 (collect at WorkflowUtils.scala:179) finished in 0.036 s
2014-10-07 10:24:04,256 INFO spark.SparkContext - Job finished: collect at WorkflowUtils.scala:179, took 0.055463278 s
2014-10-07 10:24:04,257 INFO workflow.CoreWorkflow$ - Prepared Data Set 0
2014-10-07 10:24:04,257 INFO workflow.CoreWorkflow$ - Params: null
2014-10-07 10:24:04,258 INFO workflow.CoreWorkflow$ - PreparedData: [U: Map(5 -> 10, 10 -> 3)... I: Map(5 -> 19, 10 -> 11)... R: List(RatingTD: 6 7 3, RatingTD: 10 8 3)...]
2014-10-07 10:24:04,259 INFO workflow.CoreWorkflow$ - Preparator complete
2014-10-07 10:24:04,260 INFO workflow.CoreWorkflow$ - Algo model construction
2014-10-07 10:24:04,259 INFO scheduler.TaskSetManager - Finished task 0.0 in stage 3.0 (TID 3) in 32 ms on localhost (1/1)
2014-10-07 10:24:04,263 INFO scheduler.TaskSchedulerImpl - Removed TaskSet 3.0, whose tasks have all completed, from pool
2014-10-07 10:24:04,325 INFO workflow.CoreWorkflow$ - Model ei: 0 ai: 0
2014-10-07 10:24:04,330 INFO spark.SparkContext - Starting job: collect at WorkflowUtils.scala:179
2014-10-07 10:24:04,332 INFO scheduler.DAGScheduler - Got job 4 (collect at WorkflowUtils.scala:179) with 1 output partitions (allowLocal=false)
2014-10-07 10:24:04,332 INFO scheduler.DAGScheduler - Final stage: Stage 4(collect at WorkflowUtils.scala:179)
2014-10-07 10:24:04,332 INFO scheduler.DAGScheduler - Parents of final stage: List()
2014-10-07 10:24:04,334 INFO scheduler.DAGScheduler - Missing parents: List()
2014-10-07 10:24:04,335 INFO scheduler.DAGScheduler - Submitting Stage 4 (MappedRDD[10] at map at Algorithm.scala:52), which has no missing parents
2014-10-07 10:24:04,339 INFO storage.MemoryStore - ensureFreeSpace(6496) called with curMem=42040, maxMem=280248975
2014-10-07 10:24:04,339 INFO storage.MemoryStore - Block broadcast_4 stored as values in memory (estimated size 6.3 KB, free 267.2 MB)
2014-10-07 10:24:04,341 INFO scheduler.DAGScheduler - Submitting 1 missing tasks from Stage 4 (MappedRDD[10] at map at Algorithm.scala:52)
2014-10-07 10:24:04,341 INFO scheduler.TaskSchedulerImpl - Adding task set 4.0 with 1 tasks
2014-10-07 10:24:04,342 INFO scheduler.TaskSetManager - Starting task 0.0 in stage 4.0 (TID 4, localhost, ANY, 1456 bytes)
2014-10-07 10:24:04,343 INFO executor.Executor - Running task 0.0 in stage 4.0 (TID 4)
2014-10-07 10:24:04,359 INFO storage.BlockManager - Found block rdd_2_0 locally
2014-10-07 10:24:04,578 INFO executor.Executor - Finished task 0.0 in stage 4.0 (TID 4). 26061 bytes result sent to driver
2014-10-07 10:24:04,624 INFO scheduler.DAGScheduler - Stage 4 (collect at WorkflowUtils.scala:179) finished in 0.269 s
2014-10-07 10:24:04,625 INFO spark.SparkContext - Job finished: collect at WorkflowUtils.scala:179, took 0.293755888 s
2014-10-07 10:24:04,626 INFO workflow.CoreWorkflow$ - [Map(8 -> Set((10,3), (16,3)), 4 -> Set((49,3), (37,3)))Map(45 -> Vector((20,0.7885419665305914), (12,0.5469243985794473)), 34 -> Vector((38,0.8853027504643262), (23,0.860897888071668)))...]
2014-10-07 10:24:04,627 INFO workflow.CoreWorkflow$ - Algo prediction
2014-10-07 10:24:04,629 INFO scheduler.TaskSetManager - Finished task 0.0 in stage 4.0 (TID 4) in 232 ms on localhost (1/1)
2014-10-07 10:24:04,630 INFO scheduler.TaskSchedulerImpl - Removed TaskSet 4.0, whose tasks have all completed, from pool
2014-10-07 10:24:04,633 INFO workflow.AlgoServerWrapper - predictionLocalModel
2014-10-07 10:24:04,652 INFO workflow.CoreWorkflow$ - Prediction 0 ZippedPartitionsRDD2[15] at zipPartitions at Workflow.scala:125
2014-10-07 10:24:04,679 INFO spark.SparkContext - Starting job: collect at Workflow.scala:515
2014-10-07 10:24:04,682 INFO scheduler.DAGScheduler - Got job 5 (collect at Workflow.scala:515) with 1 output partitions (allowLocal=false)
2014-10-07 10:24:04,683 INFO scheduler.DAGScheduler - Final stage: Stage 5(collect at Workflow.scala:515)
2014-10-07 10:24:04,683 INFO scheduler.DAGScheduler - Parents of final stage: List() | |
2014-10-07 10:24:04,689 INFO scheduler.DAGScheduler - Missing parents: List() | |
2014-10-07 10:24:04,690 INFO scheduler.DAGScheduler - Submitting Stage 5 (ZippedPartitionsRDD2[15] at zipPartitions at Workflow.scala:125), which has no missing parents | |
2014-10-07 10:24:04,699 INFO storage.MemoryStore - ensureFreeSpace(8632) called with curMem=48536, maxMem=280248975 | |
2014-10-07 10:24:04,700 INFO storage.MemoryStore - Block broadcast_5 stored as values in memory (estimated size 8.4 KB, free 267.2 MB) | |
2014-10-07 10:24:04,709 INFO scheduler.DAGScheduler - Submitting 1 missing tasks from Stage 5 (ZippedPartitionsRDD2[15] at zipPartitions at Workflow.scala:125) | |
2014-10-07 10:24:04,709 INFO scheduler.TaskSchedulerImpl - Adding task set 5.0 with 1 tasks | |
2014-10-07 10:24:04,711 INFO scheduler.TaskSetManager - Starting task 0.0 in stage 5.0 (TID 5, localhost, ANY, 2250 bytes) | |
2014-10-07 10:24:04,711 INFO executor.Executor - Running task 0.0 in stage 5.0 (TID 5) | |
2014-10-07 10:24:04,727 INFO storage.BlockManager - Found block rdd_2_0 locally | |
2014-10-07 10:24:04,813 INFO storage.BlockManager - Found block rdd_2_0 locally | |
2014-10-07 10:24:04,817 INFO scheduler.DAGScheduler - Stage 5 (collect at Workflow.scala:515) finished in 0.102 s | |
2014-10-07 10:24:04,818 INFO spark.SparkContext - Job finished: collect at Workflow.scala:515, took 0.138918213 s | |
2014-10-07 10:24:04,821 INFO scheduler.TaskSetManager - Finished task 0.0 in stage 5.0 (TID 5) in 107 ms on localhost (1/1) | |
2014-10-07 10:24:04,822 INFO scheduler.TaskSchedulerImpl - Removed TaskSet 5.0, whose tasks have all completed, from pool | |
2014-10-07 10:24:04,824 INFO executor.Executor - Finished task 0.0 in stage 5.0 (TID 5). 1689 bytes result sent to driver | |
2014-10-07 10:24:04,824 INFO spark.SparkContext - Starting job: count at Workflow.scala:526 | |
2014-10-07 10:24:04,826 INFO scheduler.DAGScheduler - Got job 6 (count at Workflow.scala:526) with 1 output partitions (allowLocal=false) | |
2014-10-07 10:24:04,826 INFO scheduler.DAGScheduler - Final stage: Stage 6(count at Workflow.scala:526) | |
2014-10-07 10:24:04,826 INFO scheduler.DAGScheduler - Parents of final stage: List() | |
2014-10-07 10:24:04,831 INFO scheduler.DAGScheduler - Missing parents: List() | |
2014-10-07 10:24:04,832 INFO scheduler.DAGScheduler - Submitting Stage 6 (ZippedPartitionsRDD2[15] at zipPartitions at Workflow.scala:125), which has no missing parents | |
2014-10-07 10:24:04,838 INFO storage.MemoryStore - ensureFreeSpace(8616) called with curMem=57168, maxMem=280248975 | |
2014-10-07 10:24:04,838 INFO storage.MemoryStore - Block broadcast_6 stored as values in memory (estimated size 8.4 KB, free 267.2 MB) | |
2014-10-07 10:24:04,839 INFO scheduler.DAGScheduler - Submitting 1 missing tasks from Stage 6 (ZippedPartitionsRDD2[15] at zipPartitions at Workflow.scala:125) | |
2014-10-07 10:24:04,840 INFO scheduler.TaskSchedulerImpl - Adding task set 6.0 with 1 tasks | |
2014-10-07 10:24:04,841 INFO scheduler.TaskSetManager - Starting task 0.0 in stage 6.0 (TID 6, localhost, ANY, 2250 bytes) | |
2014-10-07 10:24:04,842 INFO executor.Executor - Running task 0.0 in stage 6.0 (TID 6) | |
2014-10-07 10:24:04,850 INFO storage.BlockManager - Found block rdd_2_0 locally | |
2014-10-07 10:24:04,934 INFO storage.BlockManager - Found block rdd_2_0 locally | |
2014-10-07 10:24:04,940 INFO scheduler.DAGScheduler - Stage 6 (count at Workflow.scala:526) finished in 0.093 s | |
2014-10-07 10:24:04,941 INFO spark.SparkContext - Job finished: count at Workflow.scala:526, took 0.115969646 s | |
2014-10-07 10:24:04,941 INFO workflow.CoreWorkflow$ - DP 0 has 0 rows | |
2014-10-07 10:24:04,942 INFO workflow.CoreWorkflow$ - Metrics is null. Stop here | |
2014-10-07 10:24:04,944 INFO scheduler.TaskSetManager - Finished task 0.0 in stage 6.0 (TID 6) in 99 ms on localhost (1/1) | |
2014-10-07 10:24:04,946 INFO scheduler.TaskSchedulerImpl - Removed TaskSet 6.0, whose tasks have all completed, from pool | |
2014-10-07 10:24:04,950 INFO executor.Executor - Finished task 0.0 in stage 6.0 (TID 6). 1731 bytes result sent to driver | |
2014-10-07 10:24:04,985 INFO spark.SparkContext - Starting job: collect at Workflow.scala:695 | |
2014-10-07 10:24:04,988 INFO scheduler.DAGScheduler - Registering RDD 16 (coalesce at Workflow.scala:694) | |
2014-10-07 10:24:04,989 INFO scheduler.DAGScheduler - Got job 7 (collect at Workflow.scala:695) with 1 output partitions (allowLocal=false) | |
2014-10-07 10:24:04,989 INFO scheduler.DAGScheduler - Final stage: Stage 7(collect at Workflow.scala:695) | |
2014-10-07 10:24:04,989 INFO scheduler.DAGScheduler - Parents of final stage: List(Stage 8) | |
2014-10-07 10:24:04,991 INFO scheduler.DAGScheduler - Missing parents: List(Stage 8) | |
2014-10-07 10:24:04,994 INFO scheduler.DAGScheduler - Submitting Stage 8 (MapPartitionsRDD[16] at coalesce at Workflow.scala:694), which has no missing parents | |
2014-10-07 10:24:05,003 INFO storage.MemoryStore - ensureFreeSpace(7240) called with curMem=65784, maxMem=280248975 | |
2014-10-07 10:24:05,005 INFO storage.MemoryStore - Block broadcast_7 stored as values in memory (estimated size 7.1 KB, free 267.2 MB) | |
2014-10-07 10:24:05,009 INFO scheduler.DAGScheduler - Submitting 1 missing tasks from Stage 8 (MapPartitionsRDD[16] at coalesce at Workflow.scala:694) | |
2014-10-07 10:24:05,009 INFO scheduler.TaskSchedulerImpl - Adding task set 8.0 with 1 tasks | |
2014-10-07 10:24:05,012 INFO scheduler.TaskSetManager - Starting task 0.0 in stage 8.0 (TID 7, localhost, ANY, 1445 bytes) | |
2014-10-07 10:24:05,012 INFO executor.Executor - Running task 0.0 in stage 8.0 (TID 7) | |
2014-10-07 10:24:05,027 INFO storage.BlockManager - Found block rdd_2_0 locally |