c*****d 发帖数: 6045 | 1 mysql> LOAD DATA INFILE 'c:\my documents\student.txt' INTO TABLE pest |
|
t**i 发帖数: 688 | 2 My data is tab-delimited, with tens of thousands of columns. All of the variables are character, even though they consist of digits. How do I make sure SAS reads them in as character format?
The following doesn't seem to work:
infile "myfile" firstobs=1 lrecl=1000000 truncover;
input v1 - v50000 $ ; |
|
t**i 发帖数: 688 | 3 Example code? Like the following?
data test;
infile 'myfile' firstobs=1 dlm=',' lrecl=10000000 truncover;
length v1 - v50000 $ 12 ;
input v1 - v50000;
run; |
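As a cross-check of the "keep everything character" idea, here is a Python sketch (hypothetical data; the csv module never coerces fields, so digit strings stay strings):

```python
import csv
import io

# Stand-in for a tab-delimited file whose columns are digit strings
# but must stay character data (like the v1-v50000 question above).
raw = "001\t002\t003\n010\t020\t030\n"

rows = [row for row in csv.reader(io.StringIO(raw), delimiter="\t")]
first_cell = rows[0][0]   # "001": the leading zeros survive
```

Because no numeric conversion ever happens, values like "001" keep their leading zeros, which is exactly what a character informat protects in SAS.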
|
o******6 发帖数: 538 | 4 ☆─────────────────────────────────────☆
gutenacht (嗯) wrote on (Thu Feb 26 19:08:33 2009):
I never understood this, thx!
☆─────────────────────────────────────☆
qqzj (小车车) wrote on (Thu Feb 26 19:38:58 2009):
SET is for SAS's own data sets.
INFILE is for external-format data.
☆─────────────────────────────────────☆
gutenacht (嗯) wrote on (Thu Feb 26 20:27:39 2009):
o..thanks! |
|
w****u 发帖数: 69 | 5 This question is a bit dumb, please don't laugh.
1. I'm studying for the SAS Base certification, and one thing keeps bugging me. To reference external data you can write
data work.test;
infile file1;
But how can I quickly open file1 itself to read its contents?
2. The reason I ask: the material introduces pointer controls for reading a variable at a given position, e.g.
input @15 JobTitle. Do I have to open the infile first and count columns to learn that
JobTitle starts at column 15?
What is the point of this pointer? Can't I just write input JobTitle to read the variable? |
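As an aside on what @15 buys you: fixed-column input is just substring extraction, and yes, you normally inspect the raw file once to learn the column positions. A rough Python sketch (the record layout here is hypothetical):

```python
# Hypothetical fixed-width record: columns 1-14 hold the name,
# JobTitle starts at column 15 (index 14 in 0-based Python).
record = "John Doe".ljust(14) + "Analyst"

# Rough analogue of:  input @15 JobTitle $;
job_title = record[14:].strip()
```

The pointer matters when fields are not delimiter-separated: without it, a plain list-style `input JobTitle` would grab the first blank-delimited token instead of the field at column 15.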
|
p***7 发帖数: 535 | 6 Data facilit;
Infile "/analytics/ft.txt" DLM='|' dsd truncover lrecl=4096;
Input fac_id hosp_name $ long_name $ address1 open_date closed_date ;
run;
Data is below
fac_id | hosp_name|long_name |address1 |open_date |closed_date
112060360 | mcennter |nald Center| 7989 Linda Vista Rd|10/01/2011|11/01/2015
Whenever it reads open_date and closed_date, it says the format is not correct for them.
tried informat mmddyy10, but still doesn't work.
Help please!
Thanks |
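For reference, the same record parses cleanly once the mm/dd/yyyy assumption is made explicit; a Python sketch of that parse (not a fix for the SAS informat itself):

```python
from datetime import datetime

# One pipe-delimited record shaped like the sample data above.
line = ("112060360 | mcennter |nald Center| 7989 Linda Vista Rd"
        "|10/01/2011|11/01/2015")

fields = [f.strip() for f in line.split("|")]          # strip stray blanks
open_date = datetime.strptime(fields[4], "%m/%d/%Y").date()
closed_date = datetime.strptime(fields[5], "%m/%d/%Y").date()
```

Note the `.strip()`: the sample data has spaces around some `|` delimiters, and a date informat typically rejects a value with a leading blank, which may be part of the SAS failure too.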
|
H***a 发帖数: 735 | 7 The EOF is a bit tricky. It's easy to remember that:
** always check right after you attempt to read **
There are two solutions:
1)
...
if ( infile.is_open() )
{
string str("");
getline(infile, str); //read it first!!
while ( infile.good() )
{
lines.push_back(str);
getline( infile, str ); //now get the next line!
}
infile.close();
}
...
2) Simpler way which is recommended
...
if ( infile.is_open() )
{
string str("");
while ( getline(infile, ... [post truncated] |
|
H********g 发帖数: 43926 | 8 Upgraded version, it can read command-line input now
#csv2xls.pl
#>csv2xls.pl abcd.csv
#generates abcd.xls
use strict;
use warnings;
use Text::CSV;
use Spreadsheet::WriteExcel;
my $infile="test.csv" ;
if($ARGV[0]){$infile=$ARGV[0];}
my $basename=$infile;
$basename=~s/.csv//i;
my $outfile="$basename.xls";
my @rows;
my $csv = Text::CSV->new ( { binary => 1 } ) # should set binary attribute.
or die "Cannot use CSV: ".Text::CSV->error_diag ();
open my $fh, "<:encoding(utf8)", $infile or die "$infile: $!";
my $rowcount=... [post truncated] |
|
l****n 发帖数: 55 | 9 Thanks.
I tried
s = infile.readLine()
s = infile.readLine().replaceAll("\\n+", "")
s = infile.readLine().replaceAll("[\n\r\t]+", "")
s = infile.readLine().replaceAll("\r\n|\r|\n", "")
s = infile.readLine().replaceAll("\\n", "")
It's very strange that none of them work. |
|
P**********c 发帖数: 17 | 10 Method 1:
If your file names follow no pattern:
%macro convert (infilelist=);
%let i=1;
%let infile=%scan(&infilelist,&i);
%do %while ("&infile"~="");
proc export data=temp.&infile
outfile="c:\temp\&infile..dta"
dbms=stata replace;
run;
%let i=%eval(&i+1);
%let infile=%scan(&infilelist,&i);
%end;
%mend convert;
%convert(infilelist=name1 name2 name3 ...);
Method 2:
If your file names do follow a pattern, e.g. name1 name2 name3 name4 ... name20,
you can write another macro that calls your own convert macro:
%macro out_to_stata... [post truncated] |
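The %scan loop above walks a blank-delimited name list until it runs out. The same control flow in Python, with hypothetical names and no SAS or Stata involved:

```python
# Hypothetical name list, mirroring %convert(infilelist=name1 name2 name3)
infilelist = "name1 name2 name3"

outfiles = []
for infile in infilelist.split():            # like %scan(&infilelist,&i)
    # like outfile="c:\temp\&infile..dta" in the proc export step
    outfiles.append("c:/temp/%s.dta" % infile)
```

For Method 2's patterned names, the loop body is identical; only the name source changes, e.g. `["name%d" % i for i in range(1, 21)]`.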
|
c*****s 发帖数: 180 | 11 SAS OnlineTutor®: Advanced SAS®
Combining Data Vertically 5 of 28
Using an INFILE Statement
You can make the process of concatenating raw data files more flexible by
using an INFILE statement with the FILEVAR= option. The FILEVAR= option
enables you to dynamically change the currently opened input file to a new
input file.
General form, INFILE statement with the FILEVAR= option:
INFILE file-specification FILEVA |
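Python's standard library has a close analogue of FILEVAR=-style dynamic input: the fileinput module chains several files into one read loop. A sketch with made-up file names:

```python
import fileinput
import os
import tempfile

# Three small raw files stand in for the files FILEVAR= would switch among.
tmpdir = tempfile.mkdtemp()
paths = []
for i, text in enumerate(["jan\n", "feb\n", "mar\n"], start=1):
    path = os.path.join(tmpdir, "month%d.dat" % i)
    with open(path, "w") as f:
        f.write(text)
    paths.append(path)

# One read loop whose currently opened input file changes dynamically.
records = []
with fileinput.input(files=paths) as fin:
    for line in fin:
        records.append(line.strip())
```

As with FILEVAR=, the reading code never knows or cares which physical file the current record came from.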
|
b******y 发帖数: 2729 | 12 [Reposted from the JobHunting board]
From: buddyboy (hello), Board: JobHunting
Subject: [Question] Which is better, fscanf or fstream?
Posted: BBS 未名空间站 (Thu Feb 1 13:39:30 2007)
There are many ways to read and write files in C++.
The two most common are:
#include <cstdio>
int mydata;
FILE *infile = fopen("data.dat","r");
fscanf(infile,"%d",&mydata);
Or:
#include <fstream>
using namespace std;
int mydata;
ifstream infile("data.dat");
infile>>mydata;
Which approach is better and more practical? Thanks! |
|
n******1 发帖数: 3756 | 14
my $infile = "2.txt";
my $infile2 = "1.txt";
my %MAP;
open(IN, "<$infile") or die "\n\nNADA $infile you FOOL!!!\n\n";
my @DATA = <IN>;
#write in a hash map
foreach my $line (@DATA)
{
chomp($line);
my @data = split('\s', $line);
chomp($data[0]);
chomp($data[1]);
$MAP{$data[0]}=$data[1];
}
#print the hash map
for my $key (keys %MAP){
my $value = $MAP{$key};
print "$key => $value\n";
}
close(IN);
#2. search
open(IN, "<$infile2") or die "\n\nNADA $infile you FO... [post truncated] |
|
w********m 发帖数: 1137 | 15 Use Python.
Space O(1), time O(n):
cnt = 0
with open('file.txt', 'r') as infile:
    for _ in infile:
        cnt += 1
print cnt
Space O(n), time O(n/k):
import pyspark
sc = pyspark.SparkContext()
infile = sc.textFile('file.txt')
print infile.count() |
|
c**********e 发帖数: 2007 | 16 Why is this Strategy design pattern example made so artificially complicated?
#include
#include
#include
using namespace std;
class Strategy;
class TestBed
{
public:
enum StrategyType
{
Dummy, Left, Right, Center
};
TestBed()
{
strategy_ = NULL;
}
void setStrategy(int type, int width);
void doIt();
private:
Strategy *strategy_;
};
class Strategy
{
public:
Strategy(int width): width_(width){}
void format()
{
char line[80], wo... [post truncated] |
|
j******2 发帖数: 362 | 17 Why is there no signed problem?
P.366 L10
bitfield[n/8]|=1<<(n%8);
This goes wrong when n<0 (n is an int read from the file).
how about this:
void print_missing_one_pass(char *file_name)
{
ifstream infile(file_name);
assert(infile);
int size=0x20000000;
char *flag=new char[size];
memset(flag, 0, size);
int i;
while (infile >> i)
{
int byte=(unsigned)i>>3;
int bit=i&7;
flag[byte]|=1<<bit;
}
for (unsigned k=0; k<size; k++)
{
char t=flag[k];
if (t!='\xff')
... [post truncated] |
|
B*****g 发帖数: 34098 | 18 If you don't know Python, why insist on using Python?
import sys
infile = open(sys.argv[1], 'r')
v_list = []
d_list = []
for line in infile.readlines():
    w_list = line.rstrip('\n').split(' ')
    if w_list[0] == '_a_':
        v_list.append(w_list[1])
        d_list.append(w_list[2])
infile.close()
outfile = open(sys.argv[2], 'w')
outfile.write(', '.join(v_list) + '\n')
outfile.write(', '.join(d_list) + '\n')
outfile.close() |
|
y****w 发帖数: 3747 | 19 Input and output are both text, so why bother with a DB? Here's the unpivot shell version I have on hand.
#!/usr/bin/ksh
#echo "only processing file with below format:"
#echo "root v1,v2,v3,v4...."
#echo "root v1 v2 v3 v4...."
#echo '----------------------------------'
infile=$1
[[ ! -f $infile ]] && echo "not a file!" && return -1
tmpf=$(mktemp)
cat $infile |sed '/^$/d' | while read rt val
do
echo "$val" | sed 's/ /\
/g' | sed 's/,/\
/g' > $tmpf
cat $tmpf | sort -n | xargs -i echo "$rt {}"
rm -f $tmpf
done
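The same unpivot contract can be sketched in Python; the assumptions match the shell script above: each line is "root v1,v2,..." (comma- or space-separated), blank lines are dropped, and values come out sorted numerically:

```python
def unpivot(lines):
    # Each input line: "root v1,v2,..."; output: one "root value"
    # line per value, values sorted numerically.
    out = []
    for line in lines:
        if not line.strip():
            continue                          # like sed '/^$/d'
        root, _, rest = line.strip().partition(" ")
        values = rest.replace(",", " ").split()
        for v in sorted(values, key=float):   # like sort -n
            out.append("%s %s" % (root, v))
    return out

result = unpivot(["r1 3,1,2", "", "r2 10 2"])
```

Sorting with `key=float` matters: a plain string sort would put "10" before "2", which is the same reason the script uses `sort -n`.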
|
|
w********0 发帖数: 1211 | 20 I stored two integers in a text file and read them back with C++, but the numbers that come out are wrong. Why?
For example, I typed 17, a space, then 23 into "testinput.txt". My C++ code is:
int main(){
ifstream infile;
infile.open("testinput.txt");
int i, j;
infile >> i >> j;
cout << i << " and " << j << endl;
return 0;
}
It compiles, but the output is
-858993460 and -858993460
Could some expert point out what's wrong? Thanks. |
|
s****a 发帖数: 238 | 22 ifstream infile;
infile.open("your filename");
istringstream iss;
string textline;
while(getline(infile,textline)){
iss.clear();
iss.str(textline);
iss>>your container....;
}
I haven't tested this; give it a try yourself. |
|
g**********y 发帖数: 423 | 23 This is the kind of work a perma-postdoc does:
struct TIME
{
int seconds;
int minutes;
int hours;
};
void computeTimeDifference(struct TIME t1, struct TIME t2, struct TIME *
difference){
if(t2.seconds > t1.seconds)
{
--t1.minutes;
t1.seconds += 60;
}
difference->seconds = t1.seconds - t2.seconds;
if(t2.minutes > t1.minutes)
{
--t1.hours;
t1.minutes += 60;
}
difference->minutes = t1.minutes-t2.minutes;
difference->hours = t1.hours-t2.hours;
}
static size_t Wr... [post truncated] |
|
d***r 发帖数: 2032 | 24 I got an email asking me to take this test. There's a sample test beforehand; when I tried it, I found that on this system reading the input file always fails or no data gets read.
For example, the data on STDIN is:
4
1 2 3 4
My test program is:
#include <iostream>
#include <fstream>
#include <sstream>
#include <string>
using namespace std;
int main() {
ifstream infile("STDIN.txt");
string line;
while (getline(infile, line))
{
stringstream iss(line);
cout<<line<<endl;
}
}
There is never any output, but the same program works fine in VS2011. How should I read the data
in this situation? If I can't solve this, I doubt I'll pass the real test.
Thanks |
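On judges like this, the data usually arrives on standard input itself rather than in a file named STDIN.txt, which would explain the empty output. A Python sketch of reading it that way (shown against an in-memory stream so it is self-contained; in a real submission you would pass sys.stdin):

```python
import io
import sys

def read_ints(stream):
    # Read every whitespace-separated integer from a text stream,
    # the way a judge delivers "4\n1 2 3 4\n" on standard input.
    return [int(tok) for tok in stream.read().split()]

# In a real submission this would be: numbers = read_ints(sys.stdin)
numbers = read_ints(io.StringIO("4\n1 2 3 4\n"))
```

The C++ equivalent is simply reading from `cin` (or `std::getline(std::cin, line)`) instead of opening an ifstream.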
|
c*****s 发帖数: 180 | 25 SAS OnlineTutor®: Advanced SAS®
Combining Data Vertically 6 of 28
Using an INFILE Statement (continued)
Assigning the Names of the Files to Be Read
The next step is to assign the names of the three files to be read to the
variable nextfile:
data work.quarter;
infile temp filevar=nextfile;
input Flight $ Origin $ Dest $
Date : date9. RevCargo : comma15.2;
In this case, let's use th |
|
A******u 发帖数: 1279 | 26 Does this even count as programming?
#!/bin/bash
# Author: Amorphou
# Oct 2010
# grepfwd $pattern $infile $linenumber1 $linenumber2 $outfile
# for each occurence of a pattern, grep forward by $linenumbers
# Without a 5th arg, save to files fwdgrep.$int by default
# defaults
((linenumber1=0))
((linenumber2=0))
FILES="fwdgrep"
SED=/bin/sed
if [ "$#" -lt "2" -o "$#" -gt "5" ]
then
echo "USAGE: $0 pattern infile linenumber1 linenumber2 [optional]outfile"
exit 0
fi
if [ "$#" -ge "3" ]
then
if echo "$3" | grep "^[0-9]*$"... [post truncated] |
|
c***u 发帖数: 843 | 27 碰到了一个问题,就是从txt文件中读取数据,数据是两列scientific notation的。用
下面的方法读取,出现了精度丢失方面的问题.
ifstream infile;
.....
....
string line;
long double x;
long double y;
while(getline(infile,line))
{
stringstream stream(line);
stream>>x>>y;
...
...
}
The data is in scientific notation, ***********E**, with a long string of digits before the E. My guess is that
precision is lost during the string-to-long-double conversion at stream>>x>>y.
For example, the original data value is 0.00179, but the x that gets read out becomes 0.0018.
How should this be solved? Hoping an expert can help. |
|
o**********a 发帖数: 330 | 28 LOAD DATA LOCAL INFILE '/path/pet.txt' INTO TABLE pet;
Logging in to the MySQL server from a client: a table student has already been created on the server.
How do I import the data from student.txt on the client's F: drive into that table?
The MySQL manual says to use "LOAD DATA LOCAL INFILE '/path/pet.txt' INTO TABLE pet", but
I tried several times without success.
I'm a newbie, any help appreciated. |
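When LOAD DATA LOCAL INFILE fails, one common cause is that the client or the server disallows local_infile, which is worth checking first. As a self-contained illustration of the same bulk-load idea, here is a sketch using Python's sqlite3 as a stand-in (not MySQL; the columns are hypothetical):

```python
import sqlite3

# Tab-separated rows, like the contents of a student.txt export.
raw = "1\tAlice\n2\tBob\n"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (id INTEGER, name TEXT)")

# Split each line into fields and bulk-insert, one row per line.
rows = [line.split("\t") for line in raw.splitlines()]
conn.executemany("INSERT INTO student VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM student").fetchone()[0]
```

The parse-then-executemany pattern works the same against a MySQL connector, and it sidesteps the server-side LOCAL INFILE permission question entirely.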
|
发帖数: 1 | 29 The file is in JSON format. Written this way there is still a problem with the result. How should I change it?
with open(inputFileName,'rb') as infile:
json_raw=infile.readlines()
json_object=json.loads(json_raw)
for info in json_object:
for attribute, value in info.iteritems():
if(eval(value).isdigit()):
value.replace('"','') |
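One likely culprit: readlines() returns a list, which json.loads cannot parse. Assuming the file holds a JSON array of objects (an assumption; the original file is not shown), the usual shape is json.load on the file object, and eval() is unnecessary for values that are already strings:

```python
import io
import json

# Stand-in for the opened JSON file: assumed to be an array of objects.
infile = io.StringIO('[{"a": "1", "b": "x"}, {"a": "22"}]')

data = json.load(infile)            # parse the whole document at once

digit_values = [value
                for obj in data
                for value in obj.values()
                if value.isdigit()]  # str.isdigit works directly; no eval()
```

Also note that `value.replace('"','')` returns a new string rather than modifying anything in place, which is a second reason the original loop appears to do nothing.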
|
发帖数: 1 | 30 I also tried the approach below, and it doesn't work either:
i=0
with open(inputFileName,'rb') as infile, open(outputFileName,'wb') as
outfile:
for r in infile.readlines():
if i/2==1:
for var in r.values():
print r.values()
if eval(var).isdigit():
list(var).replace('"','')
i=i+1 |
|
r*****o 发帖数: 28 | 31 What if I want to use the matching variable as an index of an array?
What I wanted to do is:
infile:
SRC_1
SRC_2
SRC_3
SRC_4
SRC_5
change it to outfile:
SRC_2
SRC_3
SRC_4
SRC_1
SRC_5
(Ultimately, I will want to try all the possible sequences of
SRC_1 to SRC_5
So what I did:
set array = (2 3 4 1 5) #this array can be changed by script automatically
sed "/SRC_[1-5]/s/\([1-5]\)/$array[\1]/" infile > outfile
But it doesn't recognize \1 as the index of the array,
anyway to solve it? Thanks. |
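In Python this "matched digit as an array index" step is direct, since re.sub accepts a function as the replacement; a sketch using the same 2 3 4 1 5 permutation:

```python
import re

# The permutation (2 3 4 1 5): old index -> new index, as in the csh array.
mapping = {"1": "2", "2": "3", "3": "4", "4": "1", "5": "5"}

text = "SRC_1\nSRC_2\nSRC_3\nSRC_4\nSRC_5"

# The lambda receives each match object, so the captured digit
# can be looked up in the mapping table directly.
out = re.sub(r"SRC_([1-5])",
             lambda m: "SRC_" + mapping[m.group(1)],
             text)
```

To try all possible sequences, the same loop can iterate over `itertools.permutations("12345")`, rebuilding `mapping` each time.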
|
t***s 发帖数: 30 | 32 My script tries to open a very large file, 3GB in size and fails with an error
like:
a slow, suffocating death: Value too large for defined data type at
../scripts/invcsplit.pl line 156.
Part of the script is:
open (INFILE, "<$filename") or die "a slow, suffocating death: $!";
open (OUTFILE,">".$segment.$filename);
while (<INFILE>)
{
$line=$_;
....
Any suggestion would be really appreciated. Thank you! |
|
p********a 发帖数: 5352 | 33 ☆─────────────────────────────────────☆
pome (穿着一件花衣裳象一朵秋海棠) wrote on (Fri Oct 19 13:33:13 2007):
Passed, with some difficulty.
1. One question asked why you would use a hash object. I had never heard of it before.
2. For infile: filevar= ka
3. infile
tnnd, you can't edit a post once it's published from the web interface?
Advanced techniques
☆─────────────────────────────────────☆
pome (穿着一件花衣裳象一朵秋海棠) wrote on (Fri Oct 19 13:34:33 2007):
Of the 130 questions previously pinned, since there was so much repetition, only about 10 showed up.
☆─────────────────────────────────────☆
pome (穿着一件花衣裳象一朵秋海棠) wrote on (Fri Oct 19 13:35:42 2007):
Five &'s to resolve one macro variable.
☆──────────────────────────── |
|
p********a 发帖数: 5352 | 34 PROC IMPORT certainly doesn't always work, especially on CSV files, so sometimes you must use INFILE.
You can run PROC IMPORT first, then go to the SAS log, copy the INFILE code SAS generated
automatically, paste it into the editor, and correct whatever FORMATs and INFORMATs need
fixing; it's five minutes of work at most.
TIP: before copying, hold down the ALT key to make a columnar selection of the code, so you
don't copy the SAS log's line numbers.
|
g*******t 发帖数: 124 | 35 Which SAS program correctly reads the data in the raw data file that is
referenced by the fileref Volunteer?
Raw Data File Volunteer
1---+----10---+----20---+----30
ARLENE BIGGERSTAFF 19 UNC 2
JOSEPH CONSTANTINO 21 CLEM 2
MARTIN FIELDS 18 UNCG 1
a. data perm.contest;
infile volunteer;
input FirstName $ LastName $ Age
School $ Class;
run;
b. data perm.contest;
infile volunteer;
length LastName $ 11;
input FirstName $ |
|
p********a 发帖数: 5352 | 36 If your CSV file lives on SQL Server, you can read it with PROC SQL.
Otherwise use INFILE. INFILE can read selectively, so there's no need to output such a huge data set. |
|
o****o 发帖数: 8077 | 37 The result from my run:
NOTE: The infile "c:\test.txt" is:
File Name=c:\test.txt,
RECFM=V,LRECL=256
NOTE: 3 records were read from the infile "c:\test.txt".
The minimum record length was 9.
The maximum record length was 10.
NOTE: The data set WORK.HOMEWORK has 2 observations and 3 variables.
NOTE: DATA statement used (Total process time):
real time 0.01 seconds
cpu time 0.00 seconds
looks like the answer is not correct |
|
w*********e 发帖数: 1 | 38 Has anyone here hit a LOST CARD situation using SAS?
Today I needed to read in a csv file with nearly 60 variables:
data;
infile "C:/..../.csv" dsd dlm="," firstobs=2;
input ID $ ......;
run;
The log reports
...LRECL=256
LOST CARD.
*************
Adding LRECL=400 to the infile statement seems to read in all the variables, but ID turns numeric,
with two zeros after the decimal point.
Can an expert explain? Many thanks! |
|
R******d 发帖数: 1436 | 39 Here's a rough macro; take a look.
%macro pgxofy(filein=,flag=(Page x of y),append=N,outfile=,ls=132)/des='Page
x of y';
*count the total pages, using &flag as the marker;
data _null_;
retain pg 0;
infile "&filein" end=last;
input;
if index(upcase(_infile_),"&flag") then do;
pg=pg+1;
end;
if last then call symput("totpg",put(pg,4.0));
run;
*replace the &flag marker with "page X of Y";
data tttttttt;*(keep=text);
length text2 text $200.;
retain pg 0;
infile "&filein";
input;
if index(upcase(_infile_),"&flag") the |
|
w*******t 发帖数: 928 | 40 OK, I gave it a try and learned something.
data A;
infile cards;
input id $ date mmddyy10.;
format date mmddyy10.;
cards;
111 10/12/2010
111 05/14/2010
111 01/04/2008
222 05/25/2009
333 02/15/2009
333 03/15/2010
;
run;
data B;
infile cards;
input id $ date mmddyy10.;
format date mmddyy10.;
cards;
111 02/14/2010
222 04/20/2006
333 03/14/2010
; run;
PROC FASTCLUS maxiter=0 seed=B replace=NONE data=A out=AB(drop=cluster)
MAXCLUSTERS=999;
var date;
run;
proc sort data=ab; by id distance; run;
data |
|
s******r 发帖数: 1524 | 41 haha,
you only keep one copy if there are two records in A with the same distance.
Try the following; check 333.
data A;
infile cards;
input id $ date mmddyy10.;
format date mmddyy10.;
cards;
111 10/12/2010
111 05/14/2010
111 01/04/2008
222 05/25/2009
333 03/12/2010
333 03/16/2010
;
run;
data B;
infile cards;
input id $ date mmddyy10.;
format date mmddyy10.;
cards;
111 02/14/2010
222 04/20/2006
333 03/14/2010
; run;
PROC FASTCLUS maxiter=0 seed=B replace=NONE data=A out=AB(d |
|
o****o 发帖数: 8077 | 42 switch B and A in seed= and data=
/******************/
data A;
infile cards;
input id $ date mmddyy10.;
format date mmddyy10.;
cards;
111 10/12/2010
111 05/14/2010
111 01/04/2008
222 05/25/2009
333 02/15/2009
333 03/15/2010
;
run;
data B;
infile cards;
input id $ date mmddyy10.;
format date mmddyy10.;
cards;
111 02/14/2010
222 04/20/2006
333 03/14/2010
; run;
data Av/view=Av;
set A; by id;
if first.id then CLUSTER=1; else CLUSTER+1;
run;
%let maxc=3 ; *>=ma |
|
D*A 发帖数: 811 | 43 If your txt file names follow a pattern, say file_1 through file_1000, you can use the macro below.
In the infile part, replace file_1.txt with your file name. In the do loop, change the number after num=1 %to
to the number of your last file, provided the names follow a pattern. If they don't, manually copy-paste each
file name in place of file_1.txt, and run from data through quit;run; without the loop's opening and closing lines.
Experts, please advise on how to run over a whole folder of differently named files. I saw there is an option for it but haven't had time to study it carefully.
The format I'm targeting is:
Company:000007
F_MEDIA:Stock times
Title:Independent Finance Report
Create Time:20011121
/* 20 baozi (that's 200 forum coins), heh */
/* Run following codes from here */
%macro input_file;
%do num=1 %to 1;
data file_sub;
infile 'C:\Documents and Settings\SHUOY\My Docu |
|
s*****0 发帖数: 357 | 44 SAS wasn't meant to handle this type of long record, and I think that SAS is
not a good option. Run the following perl code in a Unix system and use the
resulted txt file for your SAS.
use strict;
use warnings;
my $filename = "directory/sourcefilename.txt";
my $outfile = "directory/outputfilename.txt";
open (OUTFILE, ">$outfile") or die ("Cannot write to the target file!!!\n");
open (INFILE, $filename) or die ("Cannot open the target file!!!\n");
my $line = <INFILE>;
my @numList = split (',', $l |
|
S******y 发帖数: 1123 | 45 #Try this in Python
import re
infile = r'H:\ten_bytes_list.txt'
p = re.compile(r'\W+') #Matches any non-alphanumeric character
#this is equivalent to the class [^a-zA-Z0-9_]
f = open(infile, 'r')
ls=[]
for line in f:
    my_list = p.split(line)
    my_list = [item for item in my_list if item != '']
    ls += my_list
print 'num'
for item in ls:
    print item
#------------------------------ END -------------------------- |
|
o****o 发帖数: 8077 | 46 if by 'merge' in your OP, you meant concatenation, then you can do similar
things like below:
data _null_;
file '/UNIX/oloolo/test1.txt';
x1=1; x2=2; x3=3; x4=4;
put x1 x2 x3 x4;
file '/UNIX/oloolo/test2.txt';
x1=11; x2=12; x3=13; x4=14;
put x1 x2 x3 x4;
run;
data new;
infile '/UNIX/oloolo/test1.txt';
input x1-x4; output;
infile '/UNIX/oloolo/test2.txt';
input x1-x4; output;
run;
use a macro to wrap all files
if it is merge then the case is a |
|
x***1 发帖数: 22 | 47 31:
Item 31 of 70
Given the following raw data records in DATAFILE.TXT:
----|----10---|----20---|----30
Kim,Basketball,Golf,Tennis
Bill,Football
Tracy,Soccer,Track
The following program is submitted:
data WORK.SPORTS_INFO;
length Fname Sport1-Sport3 $ 10;
infile 'DATAFILE.TXT' dlm=',';
input Fname $ Sport1 $ Sport2 $ Sport3 $;
run;
proc print data=WORK.SPORTS_INFO;
run;
Answer: C.
Obs Fname Sport1 Sport2 Sport3
1 Kim B... [post truncated] |
|
d*******o 发帖数: 493 | 48 *****You cannot finish file/infile in a single step*********;
*****A temp file is needed to bridge it*************;
data temp;
infile myfile;
input weight; output;
weight=112; output;
run;
data _null_;
set temp;
file myfile;
put weight;
run;
************NOT TESTED YET***************; |
|
s*********y 发帖数: 34 | 49 Item 6 of 63
The table WORK.PILOTS contains the following data:
WORK.PILOTS
Id Name Jobcode Salary
--- ------ ------- ------
001 Albert PT1 50000
002 Brenda PT1 70000
003 Carl PT1 60000
004 Donna PT2 80000
005 Edward PT2 90000
006 Flora PT3 100000
The data set was summarized to include average salary based on jobcode:
Jobcode Salary Avg
------- ------ ----... [post truncated] |
|